doc_4900
A: You might want to give RESTClient a try. A: I used Fiddler2, which solved my problems. Appreciate your help. A: I don't know tcpmon, but as far as I know, if the communication is on the same machine, you can't capture it using a sniffer (like Wireshark). I once used the wsi-testing-tools as a cheap solution for setting up a man-in-the-middle. You configure the wsi-testing-tools' monitor to listen on a certain port, and direct all your WS calls to this port. The monitor records all the requests and responses, and creates a nicely formatted output that can be viewed in any browser. (Note that this is probably far from the best solution; I had to use it due to draconian limitations on my tool selection, but it works.)
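The man-in-the-middle idea described above is easy to improvise. Below is a minimal, hypothetical Python sketch of such a logging relay (illustrative names, plain TCP, one request/response round trip per connection); it is not a production proxy, just the shape of the technique:

```python
import socket
import socketserver

class RelayHandler(socketserver.BaseRequestHandler):
    """Forward one request/response round trip and record both directions."""
    def handle(self):
        with socket.create_connection(self.server.upstream) as upstream:
            request = self.request.recv(65536)
            self.server.log.append(("request", request))
            upstream.sendall(request)
            response = upstream.recv(65536)
            self.server.log.append(("response", response))
            self.request.sendall(response)

class LoggingRelay(socketserver.ThreadingTCPServer):
    """Listen on an ephemeral local port and relay traffic to `upstream`,
    appending (direction, bytes) tuples to `log` as it goes."""
    allow_reuse_address = True

    def __init__(self, upstream, log):
        self.upstream = upstream  # (host, port) of the real web service
        self.log = log            # shared list of (direction, bytes) tuples
        super().__init__(("127.0.0.1", 0), RelayHandler)
```

You would point the WS client at `relay.server_address` instead of the real service and inspect `log` afterwards, much like the wsi-testing-tools monitor does.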
doc_4901
// HTML Form parser middleware for dealing with file uploads
router.post("*", (req: Request, res: Response, next: NextFunction) => {
    let busboy = new Busboy({ headers: req.headers });
    busboy.on("file", (fieldname, file, filename, encoding, mimetype) => {
        file.on("end", () => {
            console.log("File [" + fieldname + "] Finished");
        });
        // number of CSV parameters, found by splitting the first line
        let paramsLen: number;
        // first-line variable; kept outside the data callback in case the first line is split over multiple data chunks
        let firstLine = "";
        // line-split regex: split on newlines (JS regexes have no \Z end-of-input escape)
        const lineSplitReg: RegExp = /\n/;
        return new Promise((f, r) => {
            file.on("data", data => {
                console.log("File [" + fieldname + "] got " + data.length + " bytes");
                if (!paramsLen) {
                    let strChunk = data.toString();
                    if (lineSplitReg.test(strChunk)) {
                        firstLine += strChunk.split(lineSplitReg)[0];
                        paramsLen = firstLine.split(",").length;
                        // paramsLen now found! init pipe to csv writeable
                        f();
                    } else {
                        // long line: continue reading in the next data chunk
                        firstLine += strChunk;
                    }
                }
            });
        }).then(() => {
            let headers: string[] = [
                "id", "brand", "product", "serialNumber", "site",
                "area", "location", "longitude", "latitude",
            ];
            // add extra config headers once paramsLen has been discovered
            let cNum = 1;
            for (let i = headers.length; i < paramsLen; i = i + 2) {
                headers.push(`c${cNum}`);
                headers.push(`v${cNum}`);
                cNum++;
            }
            file.pipe(csv({ headers }));
        });
    });
    busboy.on("finish", () => {
        console.log("Done parsing form!");
        if (!importingDevicesFromCsv) {
            fulfill();
        }
    });
    req.pipe(busboy);
})

The problem is that by the time the promise is fulfilled, the file readable stream has already consumed some or all of the file data, which means those chunks never get passed to the csv stream. So how can I read the stream data without consuming it until the pipe to the csv parser is established, given that we may have to read over multiple data chunks beforehand?
A: My solution was to create a promise that wrapped a transform stream which read data but didn't consume it, holding the data (together with each chunk's release callback) in an array. When paramsLen was discovered, the promise was fulfilled with the transform object, then the pipe was established, and finally the withheld data in the transform stream was drained. See below:

// HTML Form parser middleware for dealing with file uploads
router.post("*", (req: Request, res: Response, next: NextFunction) => {
    let busboy = new Busboy({ headers: req.headers });
    busboy.on("file", (fieldname, file, filename, encoding, mimetype) => {
        file.on("end", () => {
            console.log("File [" + fieldname + "] Finished");
        });
        file.on("data", data => {
            console.log("File [" + fieldname + "] got " + data.length + " bytes");
        });
        return new Promise((f, r) => {
            let ts: {
                dataArray: Array<[Buffer, Function]>;
                paramsLen: number;
                firstLine: string;
                lineSplitReg: RegExp;
                stream: Transform;
                drainDone: boolean;
                drain(): void;
            } = {
                dataArray: [],
                paramsLen: undefined,
                firstLine: "",
                lineSplitReg: /\n/,
                drainDone: false,
                drain: () => {
                    // release every withheld chunk through its stored callback
                    ts.dataArray.forEach(x => {
                        x[1](null, x[0]);
                    });
                    ts.drainDone = true;
                },
                stream: new Transform({
                    transform: (data: Buffer, enc, callback: Function) => {
                        // if the drain has finished, pass data straight through
                        if (ts.drainDone) {
                            return callback(null, data);
                        }
                        // withhold the chunk and its release callback
                        ts.dataArray.push([data, callback]);
                        if (!ts.paramsLen) {
                            let strChunk = data.toString();
                            if (ts.lineSplitReg.test(strChunk)) {
                                ts.firstLine += strChunk.split(ts.lineSplitReg)[0];
                                ts.paramsLen = ts.firstLine.split(",").length;
                                f(ts);
                            } else {
                                // long line: continue reading in the next data chunk
                                ts.firstLine += strChunk;
                            }
                        }
                    },
                }),
            };
            file.pipe(ts.stream);
        }).then(ts => {
            let headers: string[] = [
                "id", "brand", "product", "serialNumber", "site",
                "area", "location", "longitude", "latitude",
            ];
            // add extra config headers once paramsLen has been discovered
            let cNum = 1;
            for (let i = headers.length; i < ts.paramsLen; i = i + 2) {
                headers.push(`c${cNum}`);
                headers.push(`v${cNum}`);
                cNum++;
            }
            ts.stream.pipe(csv({ headers }));
            // drain the withheld chunks into the newly established pipe
            ts.drain();
        });
    });
    busboy.on("finish", () => {
        console.log("Done parsing form!");
        if (!importingDevicesFromCsv) {
            fulfill();
        }
    });
    req.pipe(busboy);
})
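The withhold-then-release idea, stripped of the Node stream machinery, can be sketched in a few lines of Python (hypothetical function name; assumes text chunks for simplicity):

```python
def drain_after_first_line(chunks):
    """Hold incoming chunks until the first complete line has been seen,
    derive the CSV column count from that line, then release the held
    chunks unmodified followed by the remaining ones.

    Yields (params_len, chunk) pairs: no data is lost even when the first
    line spans several chunks, which mirrors the buffering transform above.
    """
    held = []
    first_line = ""
    params_len = None
    it = iter(chunks)
    for chunk in it:
        held.append(chunk)          # withhold: do not emit yet
        first_line += chunk
        if "\n" in first_line:
            # first line complete: count its comma-separated parameters
            params_len = len(first_line.split("\n")[0].split(","))
            break
    # drain the withheld chunks, then pass the rest straight through
    for chunk in held:
        yield params_len, chunk
    for chunk in it:
        yield params_len, chunk
```

The same shape works for byte chunks; only the newline test and the split change.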
doc_4902
const {map, items} = props; const [infoWindow, setInfoWindow] = useState(null); const [renderedItems, setRenderedItems] = useState([]); useEffect(() => { const open = (marker, content) => { infoWindow.close(); infoWindow.setContent(content) infoWindow.open(map, marker); } if(map && items){ renderedItems.forEach(e => e.setMap(null)); const newRender = []; items.forEach(e => { const newMarker = new window.google.maps.Marker({ position: e.location }); if(e.content){ newMarker.addListener("click", () => open(newMarker, e.content)); } newRender.push(newMarker); newMarker.setMap(map); }); setRenderedItems(newRender); } }, [map, items, infoWindow]); I keep getting the React warning that renderedItems should be in the dependency array. If I add it, this re-renders without end, but I can't move this code out of here, because I need to save the references to the newly created markers. A: It's normal that the warning pops up: the lint rule checks every variable/function used inside your useEffect. If you are certain that you don't need to re-run it when renderedItems changes, you can disable the rule for that line: useEffect(() => { const open = (marker, content) => { infoWindow.close(); infoWindow.setContent(content) infoWindow.open(map, marker); } if(map && items){ renderedItems.forEach(e => e.setMap(null)); const newRender = []; items.forEach(e => { const newMarker = new window.google.maps.Marker({ position: e.location }); if(e.content){ newMarker.addListener("click", () => open(newMarker, e.content)); } newRender.push(newMarker); newMarker.setMap(map); }); setRenderedItems(newRender); } // eslint-disable-next-line react-hooks/exhaustive-deps }, [map, items, infoWindow]);
doc_4903
jQuery(function($) { var CHAKRA = window.CHAKRA || {}; /* ================================================== Mobile Navigation ================================================== */ /* Clone Menu for use later */ var mobileMenuClone = $('#menu').clone().attr('id', 'navigation-mobile'); CHAKRA.mobileNav = function() { var windowWidth = $(window).width(); // Show Menu or Hide the Menu if (windowWidth <= 979) { if ($('#mobile-nav').length > 0) { mobileMenuClone.insertAfter('#menu'); $('#navigation-mobile #menu-nav').attr('id', 'menu-nav-mobile'); } } else { $('#navigation-mobile').css('display', 'none'); if ($('#mobile-nav').hasClass('open')) { $('#mobile-nav').removeClass('open'); } } } // Call the Event for Menu CHAKRA.listenerMenu = function() { $('#mobile-nav').on('click', function(e) { $(this).toggleClass('open'); $('#navigation-mobile').stop().slideToggle(350, 'easeOutExpo'); e.preventDefault(); }); $('#menu-nav-mobile a').on('click', function() { $('#mobile-nav').removeClass('open'); $('#navigation-mobile').slideUp(350, 'easeOutExpo'); }); } /* ================================================== Slider Options ================================================== */ CHAKRA.slider = function() { $.supersized({ // Functionality slideshow: 1, // Slideshow on/off autoplay: 1, // Slideshow starts playing automatically start_slide: 1, // Start slide (0 is random) stop_loop: 0, // Pauses slideshow on last slide random: 0, // Randomize slide order (Ignores start slide) slide_interval: 12000, // Length between transitions transition: 2, // 0-None, 1-Fade, 2-Slide Top, 3-Slide Right, 4-Slide Bottom, 5-Slide Left, 6-Carousel Right, 7-Carousel Left transition_speed: 300, // Speed of transition new_window: 1, // Image links open in new window/tab pause_hover: 0, // Pause slideshow on hover keyboard_nav: 1, // Keyboard navigation on/off performance: 1, // 0-Normal, 1-Hybrid speed/quality, 2-Optimizes image quality, 3-Optimizes transition speed // (Only works for Firefox/IE, not 
Webkit) image_protect: 1, // Disables image dragging and right click with Javascript // Size & Position min_width: 0, // Min width allowed (in pixels) min_height: 0, // Min height allowed (in pixels) vertical_center: 1, // Vertically center background horizontal_center: 1, // Horizontally center background fit_always: 0, // Image will never exceed browser width or height (Ignores min. dimensions) fit_portrait: 1, // Portrait images will not exceed browser height fit_landscape: 0, // Landscape images will not exceed browser width // Components slide_links: 'blank', // Individual links for each slide (Options: false, 'num', 'name', 'blank') thumb_links: 0, // Individual thumb links for each slide thumbnail_navigation: 0, // Thumbnail navigation slides: [ // Slideshow Images { image: 'http://fatcatmediahouse.com/theoneandonly2014theme/wp-content/themes/fatcat/_include/img/slider-images/image01.jpg', title: '<div class="slide-content">Chakra</div>', thumb: '', url: '' }, { image: 'http://fatcatmediahouse.com/theoneandonly2014theme/wp-content/themes/fatcat/_include/img/slider-images/image02.jpg', title: '<div class="slide-content">Responsive Design</div>', thumb: '', url: '' }, { image: 'http://fatcatmediahouse.com/theoneandonly2014theme/wp-content/themes/fatcat/_include/img/slider-images/image03.jpg', title: '<div class="slide-content">FullScreen Gallery</div>', thumb: '', url: '' }, { image: 'http://fatcatmediahouse.com/theoneandonly2014theme/wp-content/themes/fatcat/_include/img/slider-images/image04.jpg', title: '<div class="slide-content">Showcase Your Work</div>', thumb: '', url: '' } ], // Theme Options progress_bar: 0, // Timer for each slide mouse_scrub: 0 }); } /* ================================================== Navigation Fix ================================================== */ CHAKRA.nav = function() { $('.sticky-nav').waypoint('sticky'); } /* ================================================== Filter Works 
================================================== */ CHAKRA.filter = function() { if ($('#projects').length > 0) { var $container = $('#projects'); $container.imagesLoaded(function() { $container.isotope({ // options animationEngine: 'best-available', itemSelector: '.item-thumbs', layoutMode: 'fitRows' }); }); // filter items when filter link is clicked var $optionSets = $('#options .option-set'), $optionLinks = $optionSets.find('a'); $optionLinks.click(function() { var $this = $(this); // don't proceed if already selected if ($this.hasClass('selected')) { return false; } var $optionSet = $this.parents('.option-set'); $optionSet.find('.selected').removeClass('selected'); $this.addClass('selected'); // make option object dynamically, i.e. { filter: '.my-filter-class' } var options = {}, key = $optionSet.attr('data-option-key'), value = $this.attr('data-option-value'); // parse 'false' as false boolean value = value === 'false' ? false : value; options[key] = value; if (key === 'layoutMode' && typeof changeLayoutMode === 'function') { // changes in layout modes need extra logic changeLayoutMode($this, options) } else { // otherwise, apply new options $container.isotope(options); } return false; }); } } /* ================================================== FancyBox ================================================== */ CHAKRA.fancyBox = function() { if ($('.fancybox').length > 0 || $('.fancybox-media').length > 0 || $('.fancybox-various').length > 0) { $(".fancybox").fancybox({ padding: 0, beforeShow: function() { this.title = $(this.element).attr('title'); this.title = '<h4>' + this.title + '</h4>' + '<p>' + $(this.element).parent().find('img').attr('alt') + '</p>'; }, helpers: { title: { type: 'inside' }, } }); $('.fancybox-media').fancybox({ openEffect: 'none', closeEffect: 'none', helpers: { media: {} } }); $(".fancybox-various").fancybox({ maxWidth: 800, maxHeight: 600, fitToView: false, width: '70%', height: '70%', autoSize: false, closeClick: false, openEffect: 
'none', closeEffect: 'none' }); } } /* ================================================== Contact Form ================================================== */ CHAKRA.contactForm = function() { $("#contact-submit").on('click', function() { $contact_form = $('#contact-form'); var fields = $contact_form.serialize(); $.ajax({ type: "POST", url: "_include/php/contact.php", data: fields, dataType: 'json', success: function(response) { if (response.status) { $('#contact-form input').val(''); $('#contact-form textarea').val(''); } $('#response').empty().html(response.html); } }); return false; }); } /* ================================================== Twitter Feed ================================================== */ CHAKRA.tweetFeed = function() { var valueTop = -64; // Margin Top Value $("#ticker").tweet({ username: "Bluxart", // Change this with YOUR ID page: 1, avatar_size: 0, count: 10, template: "{text}{time}", filter: function(t) { return !/^@\w+/.test(t.tweet_raw_text); }, loading_text: "loading ..." 
}).bind("loaded", function() { var ul = $(this).find(".tweet_list"); var ticker = function() { setTimeout(function() { ul.find('li:first').animate({ marginTop: valueTop + 'px' }, 500, 'linear', function() { $(this).detach().appendTo(ul).removeAttr('style'); }); ticker(); }, 5000); }; ticker(); }); } /* ================================================== Menu Highlight ================================================== */ CHAKRA.menu = function() { $('#menu-nav, #menu-nav-mobile').onePageNav({ currentClass: 'current', changeHash: false, scrollSpeed: 750, scrollOffset: 30, scrollThreshold: 0.5, easing: 'easeOutExpo', filter: ':not(.external)' }); } /* ================================================== Next Section ================================================== */ CHAKRA.goSection = function() { $('#nextsection').on('click', function() { $target = $($(this).attr('href')).offset().top - 30; $('body, html').animate({ scrollTop: $target }, 750, 'easeOutExpo'); return false; }); } /* ================================================== GoUp ================================================== */ CHAKRA.goUp = function() { $('#goUp').on('click', function() { $target = $($(this).attr('href')).offset().top - 30; $('body, html').animate({ scrollTop: $target }, 750, 'easeOutExpo'); return false; }); } /* ================================================== Scroll to Top ================================================== */ CHAKRA.scrollToTop = function() { var windowWidth = $(window).width(), didScroll = false; var $arrow = $('#back-to-top'); $arrow.click(function(e) { $('body,html').animate({ scrollTop: "0" }, 750, 'easeOutExpo'); e.preventDefault(); }) $(window).scroll(function() { didScroll = true; }); setInterval(function() { if (didScroll) { didScroll = false; if ($(window).scrollTop() > 1000) { $arrow.css('display', 'block'); } else { $arrow.css('display', 'none'); } } }, 250); } /* ================================================== Thumbs / Social Effects 
================================================== */ // Fix Hover on Touch Devices CHAKRA.utils = function() { $('.item-thumbs').bind('touchstart', function() { $(".active").removeClass("active"); $(this).addClass('active'); }); } /* ================================================== Accordion ================================================== */ CHAKRA.accordion = function() { var accordion_trigger = $('.accordion-heading.accordionize'); accordion_trigger.delegate('.accordion-toggle', 'click', function(event) { if ($(this).hasClass('active')) { $(this).removeClass('active'); $(this).addClass('inactive'); } else { accordion_trigger.find('.active').addClass('inactive'); accordion_trigger.find('.active').removeClass('active'); $(this).removeClass('inactive'); $(this).addClass('active'); } event.preventDefault(); }); } /* ================================================== Toggle ================================================== */ CHAKRA.toggle = function() { var accordion_trigger_toggle = $('.accordion-heading.togglize'); accordion_trigger_toggle.delegate('.accordion-toggle', 'click', function(event) { if ($(this).hasClass('active')) { $(this).removeClass('active'); $(this).addClass('inactive'); } else { $(this).removeClass('inactive'); $(this).addClass('active'); } event.preventDefault(); }); } /* ================================================== Tooltip ================================================== */ CHAKRA.toolTip = function() { $('a[data-toggle=tooltip]').tooltip(); } /* ================================================== Map ================================================== */ CHAKRA.map = function() { if ($('.map').length > 0) { $('.map').each(function(i, e) { $map = $(e); $map_id = $map.attr('id'); $map_lat = $map.attr('data-mapLat'); $map_lon = $map.attr('data-mapLon'); $map_zoom = parseInt($map.attr('data-mapZoom')); $map_title = $map.attr('data-mapTitle'); var latlng = new google.maps.LatLng($map_lat, $map_lon); var options = { scrollwheel: false, 
draggable: false, zoomControl: false, disableDoubleClickZoom: false, disableDefaultUI: true, zoom: $map_zoom, center: latlng, mapTypeId: google.maps.MapTypeId.ROADMAP }; var styles = [{ stylers: [{ hue: "#2F3238" }, { saturation: -20 }] }, { featureType: "road", elementType: "geometry", stylers: [{ lightness: 100 }, { visibility: "simplified" }] }, { featureType: "road", elementType: "labels", stylers: [{ visibility: "off" }] }]; var styledMap = new google.maps.StyledMapType(styles, { name: "Styled Map" }); var map = new google.maps.Map(document.getElementById($map_id), options); var image = '_include/img/marker.png'; var marker = new google.maps.Marker({ position: latlng, map: map, title: $map_title, icon: image }); map.mapTypes.set('map_style', styledMap); map.setMapTypeId('map_style'); var contentString = '<p><strong>Company Name</strong><br>Address here</p>'; var infowindow = new google.maps.InfoWindow({ content: contentString }); google.maps.event.addListener(marker, 'click', function() { infowindow.open(map, marker); }); }); } } /* ================================================== Init ================================================== */ CHAKRA.slider(); $(document).ready(function() { // Call placeholder.js to enable Placeholder Property for IE9 Modernizr.load([{ test: Modernizr.input.placeholder, nope: '_include/js/placeholder.js', complete: function() { if (!Modernizr.input.placeholder) { Placeholders.init({ live: true, hideOnFocus: false, className: "yourClass", textColor: "#999" }); } } }]); // Preload the page with jPreLoader $('body').jpreLoader({ splashID: "#jSplash", showSplash: true, showPercentage: true, autoClose: true }); CHAKRA.nav(); CHAKRA.mobileNav(); CHAKRA.listenerMenu(); CHAKRA.menu(); CHAKRA.goSection(); CHAKRA.goUp(); CHAKRA.filter(); CHAKRA.fancyBox(); CHAKRA.contactForm(); CHAKRA.tweetFeed(); CHAKRA.scrollToTop(); CHAKRA.utils(); CHAKRA.accordion(); CHAKRA.toggle(); CHAKRA.toolTip(); CHAKRA.map(); }); $(window).resize(function() { 
CHAKRA.mobileNav(); }); }); Now here is my PHP // Hook into the 'wp_enqueue_scripts' action add_action( 'wp_enqueue_scripts', 'main_scripts' ); function main_scripts(){ // Default JS (Use wp_localize_script to pass in PHP) wp_deregister_script( 'main' ); wp_register_script( 'main', trailingslashit( THEME_URI ) .'_include/js/main.js', false, '1.0', true ); wp_enqueue_script( 'main' ); } How do I change the image part of the script if I have the following Advanced Custom Fields <?php the_field('image1') ?> <?php the_field('image2') ?> <?php the_field('image3') ?> <?php the_field('image4') ?> <?php the_field('caption1') ?> <?php the_field('caption2') ?> <?php the_field('caption3') ?> <?php the_field('caption4') ?> The JavaScript that I am trying to change is: slides: [ // Slideshow Images { image: 'http://fatcatmediahouse.com/theoneandonly2014theme/wp-content/themes/fatcat/_include/img/slider-images/image01.jpg', title: '<div class="slide-content">Chakra</div>', thumb: '', url: '' }, { image: 'http://fatcatmediahouse.com/theoneandonly2014theme/wp-content/themes/fatcat/_include/img/slider-images/image02.jpg', title: '<div class="slide-content">Responsive Design</div>', thumb: '', url: '' }, { image: 'http://fatcatmediahouse.com/theoneandonly2014theme/wp-content/themes/fatcat/_include/img/slider-images/image03.jpg', title: '<div class="slide-content">FullScreen Gallery</div>', thumb: '', url: '' }, { image: 'http://fatcatmediahouse.com/theoneandonly2014theme/wp-content/themes/fatcat/_include/img/slider-images/image04.jpg', title: '<div class="slide-content">Showcase Your Work</div>', thumb: '', url: '' } ], A: wp_localize_script will not actually change the values already in a script. Rather, it enables you to set global JavaScript variables, which can then be utilized by a script. See below.
PHP $images = array( array("image" => 'slider-images/image01.jpg', "title" => "Chakra"), array("image" => 'slider-images/image02.jpg', "title" => "Another title") ); wp_register_script( 'main', trailingslashit( THEME_URI ) .'_include/js/main.js', false, '1.0', true ); wp_localize_script( 'main', 'My_Slide_Images', $images ); wp_enqueue_script( 'main' ); In the JS file slides: window.My_Slide_Images
doc_4904
A: You don’t have to use the body element to add Microdata. You may add Microdata attributes to all HTML5 elements. <body> <article itemscope itemtype="http://schema.org/BlogPosting"> <!-- … --> </article> </body> If your software doesn’t allow you to add Microdata attributes at all, you could consider using JSON-LD instead. You only have to add a script element with type="application/ld+json". <script type="application/ld+json"> { "@context": "http://schema.org", "@type": "BlogPosting" } </script>
doc_4905
On Windows there is a distribution compile system called IncrediBuild, which has a nice visualization. Any chance to find something similar on Mac? If not, any ideas what I could do to identify which translation units or dependencies during a compile take too much time? A: I am trying to identify which translation unit in my Xcode C++ project takes too much time. Check Xcode's Report Navigator. You can select the report for a given build and see a list of all the build steps and the time taken for each one, so you can see which one(s) take the most time. It's not a visualization, but it does give you the information you need to find the files that take a long time to compile, link, etc. Select the Report Navigator, and then click on the particular build that you want to look at. You'll see a list of the parts of the build, like "Prepare to build" and "Build MyProject", with disclosure triangles next to each. Click the disclosure triangle for the "Build MyProject" (it'll obviously be named with the name of the actual target, not "MyProject"), and you'll see a list of the individual build steps and times.
doc_4906
<div id = "adsdiv">This ad will close in <span id = "closingtimer">10</span> seconds </div> <script type="text/javascript"> function closeMyAd() { document.getElementById("adsdiv").style.display = "none" ; } var seconds = 10; function display() { seconds--; if (seconds < 1) { closeMyAd(); } else { document.getElementById( "closingtimer" ).innerHTML = seconds ; setTimeout("display()", 1000); } } display(); </script> This code only counts down 10 seconds and then closes the ad. A: I would keep a timer variable that grows constantly, tracking the elapsed time in seconds. Every time it updates (once a second), divide it by ten and apply the modulo operator to the result to see whether it is even or odd. Then an if statement hides the ad when it's odd and shows it when it's even. That would make it flash once every ten seconds. A: You can try this. Hope it helps you. var hidden = true; function close() { document.getElementById("adsdiv").style.display = "none" ; } function display() { document.getElementById("adsdiv").style.display = "block" ; } setInterval(function(){if(hidden) {display(); hidden = false; }else{ close(); hidden = true; }}, 10000);
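The modulo idea from the first answer is easy to model outside the browser. A Python sketch of the visibility schedule (hypothetical function name), which the once-a-second timer would consult:

```python
def ad_visible(elapsed_seconds):
    """Return True when the ad should be shown: visibility flips every
    10 seconds, starting visible, so even 10-second windows show the ad
    and odd ones hide it."""
    return (elapsed_seconds // 10) % 2 == 0
```

In the page, the equivalent check inside a 1-second `setInterval` callback drives `style.display` between "block" and "none".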
doc_4907
table field in database: ___________ |NUMERIC_COL| ------------- | 0.00 | ------------- The command used to create external table: nzsql -u -pw -db -c "CREATE EXTERNAL TABLE 'file' USING (IGNOREZERO false) AS SELECT numeric_col FROM table;" output: 0 Now, if I use nzsql to select the same field nzsql -u -pw -db -c "SELECT numeric_col FROM table;" output: 0.00 Is there a flag/command I could use to save the decimals in the external table. Thanks!
doc_4908
Code: void main() { runApp(App()); } class App extends StatelessWidget { // This widget is the root of your application. @override Widget build(BuildContext context) { return MaterialApp( debugShowCheckedModeBanner: false, title: 'app', theme: ThemeData( primarySwatch: Colors.blue, visualDensity: VisualDensity.adaptivePlatformDensity, ), home: AppNavigation(), ); } } class AppNavigation extends StatefulWidget { @override _AppNavigationState createState() => _AppNavigationState(); } class _AppNavigationState extends State<AppNavigation> { int _currentIndex = 0; final List<Widget> _children = [ HomeScreen(), SettingsScreen(), ]; void onTappedBar(int index) { setState(() { _currentIndex = index; }); } @override Widget build(BuildContext context) { return Scaffold( body: _children[_currentIndex], bottomNavigationBar: BottomNavigationBar( onTap: onTappedBar, currentIndex: _currentIndex, items: <BottomNavigationBarItem>[ BottomNavigationBarItem( icon: Icon(Icons.home), title: Text('Home')), BottomNavigationBarItem( icon: Icon(Icons.settings), title: Text('Settings')), ]), ); } } class HomeScreen extends StatelessWidget { @override Widget build(BuildContext context) { var size = MediaQuery.of(context).size; // gives device width and height return Scaffold( floatingActionButton: FloatingActionButton( onPressed: () { showBottomSheet( context: context, builder: (context) => Container( height: 320, decoration: BoxDecoration( boxShadow: [ BoxShadow( color: Colors.grey.withOpacity(0.5), spreadRadius: 5, blurRadius: 20, offset: Offset(0, 3), ), ], color: Colors.white, borderRadius: BorderRadius.only( topLeft: Radius.circular(25), topRight: Radius.circular(25), ), ), padding: EdgeInsets.symmetric(horizontal: 20, vertical: 30), child: Center(child: Text('Bottom action sheet')), )); }, child: Icon(Icons.add), backgroundColor: Colors.deepPurple), body: Center(child: Text("home page"))); } } Below is the output of above code.The bottom action sheet appears above the bottom navigation 
bar. I expect the bottom sheet to appear at the bottom of the screen. A: I believe what you are trying to achieve is done by using "showModalBottomSheet" like this: return Scaffold( resizeToAvoidBottomInset: false, floatingActionButton: FloatingActionButton( onPressed: () { // what you asked for showModalBottomSheet( barrierColor: Colors.white.withOpacity(0), shape: RoundedRectangleBorder( borderRadius: BorderRadius.vertical( top: Radius.circular(25), ), ), context: context, builder: (context) => Container( height: 320, decoration: BoxDecoration( boxShadow: [ BoxShadow( color: Colors.grey.withOpacity(0.5), spreadRadius: 5, blurRadius: 20, offset: Offset(0, 3), ), ], color: Colors.white, borderRadius: BorderRadius.only( topLeft: Radius.circular(25), topRight: Radius.circular(25), ), ), padding: EdgeInsets.symmetric(horizontal: 20, vertical: 30), child: Center(child: Text('Bottom action sheet')), )); }, child: Icon(Icons.add), backgroundColor: Colors.deepPurple), body: Center(child: Text("home page"))); edit: I have modified the code so that it has the same shadow effect as the one in the picture you've posted
doc_4909
A: Create a method in your controller that returns an array of sort descriptors, something like this... - (NSArray*)sortDescriptorsForPopupFoo { // Create and return array of NSSortDescriptor } In IB, click on the controller, then go to the bindings inspector, and you can bind the method to the "Sort Descriptors" bindings for the array controller.
doc_4910
W: Failed to fetch http://toolbelt.heroku.com/ubuntu/./en Could not connect to toolbelt.heroku.com:http: [IP: 107.22.234.17 80] E: Some index files failed to download. They have been ignored, or old ones used instead. Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package heroku-toolbelt houxianxu@houxianxu-N80Vn:~$ A: The site toolbelt.heroku.com is blocked in China; you should use a workaround for GFW issues to access it, like this.
doc_4911
My question: How to display the coordinates of chosen data (by clicking on it) from a matplotlib plot in my GUI based on PyQT (in that case in my label lbl)? Also, it would be nice to highlight the chosen data point in the plot. Here is my code (working): import numpy as np import matplotlib.pyplot as plt from PyQt4 import QtGui import sys from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas from matplotlib.backends.backend_qt4agg import NavigationToolbar2QTAgg as NavigationToolbar import matplotlib.pyplot as plt class Window(QtGui.QDialog): def __init__(self, parent=None): super(Window, self).__init__(parent) self.initUI() def initUI(self): self.msg = '0' # a figure instance to plot on self.figure = plt.figure() self.canvas = FigureCanvas(self.figure) self.toolbar = NavigationToolbar(self.canvas, self) # a label self.lbl = QtGui.QLabel(self.msg) # set the layout layout = QtGui.QVBoxLayout() layout.addWidget(self.toolbar) layout.addWidget(self.canvas) layout.addWidget(self.lbl) self.setLayout(layout) self.plot() def plot(self): # random data data = [np.random.random() for i in range(10)] # create an axis ax = self.figure.add_subplot(111) # discards the old graph ax.hold(False) # plot data line, = ax.plot(data, 'o', picker=5) # 5 points tolerance self.canvas.draw() self.canvas.mpl_connect('pick_event', Window.onpick) def onpick(self): thisline = self.artist xdata = thisline.get_xdata() ydata = thisline.get_ydata() ind = self.ind # show data self.msg = (xdata[ind], ydata[ind]) print(self.msg) # This does not work: #Window.lbl.setText(self.msg) if __name__ == '__main__': app = QtGui.QApplication(sys.argv) main = Window() main.show() sys.exit(app.exec_()) A: The self is being overlapped by the picker (not sure why). 
In any case this should work: import numpy as np import matplotlib.pyplot as plt from PyQt4 import QtGui import sys from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas from matplotlib.backends.backend_qt4agg import NavigationToolbar2QTAgg as NavigationToolbar import matplotlib.pyplot as plt class Window(QtGui.QDialog): def __init__(self, parent=None): super(Window, self).__init__(parent) self.initUI() def initUI(self): self.msg = '0' # a figure instance to plot on self.figure = plt.figure() self.canvas = FigureCanvas(self.figure) self.toolbar = NavigationToolbar(self.canvas, self) # a label self.lbl = QtGui.QLabel(self.msg) # set the layout layout = QtGui.QVBoxLayout() layout.addWidget(self.toolbar) layout.addWidget(self.canvas) layout.addWidget(self.lbl) self.setLayout(layout) self.plot() def changelabel(arg): main.lbl.setText(str(arg[0])+' '+str(arg[1])) def plot(self): # random data data = [np.random.random() for i in range(10)] # create an axis ax = self.figure.add_subplot(111) # discards the old graph ax.hold(False) # plot data line, = ax.plot(data, 'o', picker=5) # 5 points tolerance self.canvas.draw() self.canvas.mpl_connect('pick_event', Window.onpick) def onpick(self): thisline = self.artist xdata = thisline.get_xdata() ydata = thisline.get_ydata() ind = self.ind # show data self.msg = (xdata[ind], ydata[ind]) print(self.msg) # Window.changelabel(self.msg) main.lbl.setText(str(self.msg[0])+' '+str(self.msg[1])) if __name__ == '__main__': app = QtGui.QApplication(sys.argv) main = Window() main.show() sys.exit(app.exec_()) , the change is in the setText function, since I call it directly from the variable (no self or Window). main.lbl.setText(str(self.msg[0])+' '+str(self.msg[1]))
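The "self is being overlapped" mystery can be reproduced without Qt or matplotlib: `Window.onpick` is an unbound function, so whatever the caller passes as the first argument (here, the pick event) lands in the parameter named `self`. A minimal Python sketch (hypothetical names):

```python
class Picker:
    def onpick(self, event=None):
        # with a *bound* method, `self` is the Picker instance;
        # with the *unbound* function, `self` is whatever was passed first
        return self

p = Picker()
unbound_result = Picker.onpick("pick_event")  # registered like Window.onpick
bound_result = p.onpick("pick_event")         # registered like self.onpick
```

So connecting the bound method (`self.canvas.mpl_connect('pick_event', self.onpick)`, with `onpick(self, event)` taking an explicit event parameter) is a cleaner fix than reaching for the module-level `main` variable.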
doc_4912
How many married women over age 50 embarked in Cherbourg?

Note: 'first' is a function in Pandas, so 'titanic.first' will generate an error; use 'titanic['first']' instead.

This is the data that is being used: https://docs.google.com/spreadsheets/d/1GhwOG6sH2JkNAxB664T7nmrob1aPYKlcVfKlTeXmzCw/edit?usp=sharing

I have come up with this so far, but keep getting syntax errors:

criteria = titanic['first']str.contains('Mrs.')&(titanic.age > 50)&(titanic.embarked.str.contains('Cher')]
number = criteria.last.count()
print number

A: Multiple syntax errors here, plus last is also a built-in function:

criteria = df.loc[(df['first'].str.startswith('Mrs.')) &
                  (df['age'] > 50.0) &
                  (df['embarked'] == 'Cherbourg')]
number = criteria['last'].count()
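The answer's fix boils down to three boolean tests combined with &. The same filtering logic can be sanity-checked without pandas at all; here is a minimal sketch on a made-up mini-dataset (the column names and rows are assumptions for illustration, not the real spreadsheet):

```python
# Hypothetical stand-in rows mirroring the spreadsheet's assumed columns.
passengers = [
    {"first": "Mrs. Alice", "last": "Smith", "age": 62.0, "embarked": "Cherbourg"},
    {"first": "Mr. Bob",    "last": "Jones", "age": 70.0, "embarked": "Cherbourg"},
    {"first": "Mrs. Carol", "last": "Lee",   "age": 34.0, "embarked": "Southampton"},
    {"first": "Mrs. Dana",  "last": "Cho",   "age": 55.0, "embarked": "Cherbourg"},
]

# Each condition is one boolean test; a row must pass all three.
matches = [
    p for p in passengers
    if p["first"].startswith("Mrs.")
    and p["age"] > 50.0
    and p["embarked"] == "Cherbourg"
]
print(len(matches))  # 2
```

In pandas the `&` operator plays the role of the `and` here, which is why each condition must sit in its own parentheses.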
doc_4913
Array
(
    [1] => 123,456,789,3255
    [2] => 585,478,437,1237
)

Search Text = 12
Output I want -> 123,1237

What way should I go?

$array = array();
array_push($array,'1234',534,75,746);
array_push($array,'164',574,752,755);
array_push($array,'154',58,754,76);
$search_text = '75';

I want output = 75,752,755,754

A: You can do this using strpos and a loop.

$numbers = array();
array_push($numbers,'1234', 534, 75, 746);
array_push($numbers,'164', 574, 752, 755);
array_push($numbers,'154', 58, 754, 76);

$searchNumber = '75';

$output = [];
foreach ($numbers as $number) {
    if (strpos((string) $number, $searchNumber) !== false) {
        $output[] = $number;
    }
}

// 75, 752, 755, 754
echo implode(", ", $output);

If you are using PHP 8 you could even replace the strpos with the str_contains function:

if (str_contains($number, $searchNumber)) {
    $output[] = $number;
}

RFC str_contains

A: Try this:

$result = [];
$array = [];
$toSearch = '75';
array_push($array,'1234',534,75,746);
array_push($array,'164',574,752,755);
array_push($array,'154',58,754,76);

// If your array is one dimension
$result = array_filter($array, function($el) use ($toSearch) {
    return strpos((string) $el, (string) $toSearch);
});

// For 2D array:
foreach ($array as $cur) {
    $result = array_merge($result, array_filter($cur, function($el) use ($toSearch) {
        return strpos((string) $el, (string) $toSearch);
    }));
}
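The core of both answers is the same substring test on the decimal representation of each number. For comparison, the equivalent filter is a one-liner in Python over the flattened list (illustration only; the question itself is PHP):

```python
numbers = [1234, 534, 75, 746, 164, 574, 752, 755, 154, 58, 754, 76]
search = "75"

# Keep every number whose decimal string contains the search text.
found = [n for n in numbers if search in str(n)]
print(", ".join(str(n) for n in found))  # 75, 752, 755, 754
```

Note that, unlike PHP's `strpos`, the `in` operator returns a plain boolean, so there is no "position 0 is falsy" pitfall to guard against with `!== false`.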
doc_4914
File src = ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE);
FileHandler.copy(src, new File("*./Screenshots/facebook.png"));

A: This should work; make sure you have the imports correct. Also, the new file path should start with ./, which means it is written relative to the current directory.

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;

public void getscreenshot() throws IOException {
    File src = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
    FileUtils.copyFile(src, new File("./Screenshots/facebook.png"));
}
doc_4915
Sample code:

unit InsecureBrowser;

interface

uses
  Winapi.Windows, Winapi.Messages, Winapi.Urlmon, Winapi.WinInet,
  System.SysUtils, System.Variants, System.Classes, Vcl.Graphics,
  Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.OleCtrls, Vcl.StdCtrls,
  SHDocVw_EWB, EwbCore, EmbeddedWB;

type
  TInsecureBrowserForm = class(TForm, IHttpSecurity, IWindowForBindingUI)
    web: TEmbeddedWB;
    cmdGoInsecure: TButton;
    procedure webQueryService(Sender: TObject; const [Ref] rsid, iid: TGUID;
      var Obj: IInterface);
    procedure cmdGoInsecureClick(Sender: TObject);
  private
    { IWindowForBindingUI }
    function GetWindow(const guidReason: TGUID; out hwnd): HRESULT; stdcall;
    { IHttpSecurity }
    function OnSecurityProblem(dwProblem: Cardinal): HRESULT; stdcall;
  end;

var
  InsecureBrowserForm: TInsecureBrowserForm;

implementation

{$R *.dfm}

function TInsecureBrowserForm.GetWindow(const guidReason: TGUID;
  out hwnd): HRESULT;
begin
  Result := S_FALSE;
end;

function TInsecureBrowserForm.OnSecurityProblem(dwProblem: Cardinal): HRESULT;
begin
  if (dwProblem = ERROR_INTERNET_INVALID_CA) or
     (dwProblem = ERROR_INTERNET_SEC_CERT_CN_INVALID) then
    Result := S_OK
  else
    Result := E_ABORT;
end;

procedure TInsecureBrowserForm.webQueryService(Sender: TObject;
  const [Ref] rsid, iid: TGUID; var Obj: IInterface);
begin
  if IsEqualGUID(IID_IWindowForBindingUI, iid) then
    Obj := Self as IWindowForBindingUI
  else if IsEqualGUID(IID_IHttpSecurity, iid) then
    Obj := Self as IHttpSecurity;
end;

procedure TInsecureBrowserForm.cmdGoInsecureClick(Sender: TObject);
begin
  web.Navigate('https://evil.intranet.site');
end;

end.

A: It's not obvious, but it turns out you need to navigate to about:blank before using WebBrowser2, or certain things just don't happen, including some QueryService calls. Thanks to Igor Tandetnik for identifying this in 2010.
So, just add:

procedure TInsecureBrowserForm.FormCreate(Sender: TObject);
begin
  web.Navigate('about:blank');
end;

I also wrote this up on my blog: https://marc.durdin.net/2016/03/dont-forget-to-navigate-to-aboutblank-when-embedding-iwebbrowser2/
doc_4916
My Mesos is 1.6.0. Thanks.

A: There are two places to look:

* Check the task status update message and its reason.
* Take a look at the Mesos sandbox and examine stdout/stderr and the other logs generated by your app. Here are instructions on how to do it.

You may need to decipher the problem from the exit code. Here is an explanation of how to do it.
doc_4917
<!DOCTYPE html>
<html lang="en">
<div class="container" id="content-area">
    <div class="flex-row flex-baseline flex-space-between" data-id="1826" id="info">
        <h1 class="no-margin">XYZ</h1>
        <div class="new-stack" id="sublists">Added</div>
    </div>
</div>

I am looking to pull the data-id attribute from the div tag. Here is what I am trying using a CSS selector:

>>> response.css("#content-area div")[0].css("::attr[data-id]").get()

And I got the error below:

cssselect.parser.SelectorSyntaxError: Got pseudo-element ::attr not at the end of a selector

Here is how I solved it, by combining CSS and XPath selectors:

>>> response.css("#content-area div")[0].xpath("@data-id").get()
'1826'

Is there any solution which can do this using just a CSS selector?

A: You need to use () instead of []:

>>> response.css("#content-area div")[0].css("::attr(data-id)").get()
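Outside Scrapy, the same attribute can be pulled with nothing but the standard library. This is only a rough sketch using html.parser (Scrapy's own selectors are backed by parsel, not this class), shown here to make clear what "reading an attribute of a start tag" means:

```python
from html.parser import HTMLParser

class DataIdCollector(HTMLParser):
    """Collect the data-id attribute of every tag that carries one."""
    def __init__(self):
        super().__init__()
        self.ids = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        for name, value in attrs:
            if name == "data-id":
                self.ids.append(value)

html = ('<div class="container" id="content-area">'
        '<div data-id="1826" id="info"></div></div>')
parser = DataIdCollector()
parser.feed(html)
print(parser.ids)  # ['1826']
```

The parser tolerates the loose markup of the snippet in the question, since html.parser does not require well-formed XML.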
doc_4918
Options I am looking at for a SOAP client are:

1. JAX-WS
2. Spring MVC & Spring-WS
3. Apache Axis or CXF
4. Spring Integration or Camel

Can these be used for consumption of services too? Won't that be overhead? What do you suggest? Please recommend the best option, even if it is not in the above list. Thanks in advance.

A: Look at this excellent post about this subject: Which framework is better, CXF or Spring-WS?

My advice, based on the fact that you only have to develop one client, is to make your choice based on your context, to optimize your productivity and avoid adding tons of layers and libs to your app. Consider:

* Pure Java EE app, or an app already using Spring
* Your current application server: JBoss, for example, already provides a CXF implementation that it is strongly suggested to use
* The service providers' "age": I have met some problems calling AS400 or old IBM system web services; no client was working
* Your IDE and plugins: for example, if you have Eclipse, the Axis/CXF plugins are very interesting

Concerning Camel, it is interesting if you have a different source and destination, like HTTP to JMS. For Camel, read this post: What exactly is Apache Camel?
doc_4919
import java.util.Scanner;
import java.util.Stack;

public class ReverseStack {
    public static void main(String[] args) {
        String sentence;
        System.out.println("Enter a sentence: ");
        Scanner scan = new Scanner(System.in);
        sentence = scan.nextLine();
        String k = PrintStack(sentence);
    }

    private static String PrintStack(String sentence) {
        String reverse;
        String stringReversed = "";
        Stack<String> stack = new Stack<String>();
        sentence.split(" ");
        for (int i = 0; i < sentence.length(); i++) {
            stack.push(sentence.substring(i, i + 1));
        }
        while (!stack.isEmpty()) {
            stringReversed += stack.pop();
        }
        System.out.println("Reverse is: " + stringReversed);
        return reverse;
    }
}

A: I will type an explanation so you can still get the experience of writing the code, rather than me just giving you the code.

First create a Stack of Characters. Then add each character in the String to the Stack, starting with the first char, then the second, and so on. Now either clear the String or create a new String to store the reversed word. Finally, add each character from the Stack to the String. This will pull the last character off first, then the second to last, and so on.

Note: I believe you have to use the Character wrapper class, rather than the primitive char; I may be incorrect about that though. If you aren't familiar with how Stacks work, here is a nice interactive tool to visualize it: http://www.cise.ufl.edu/~sahni/dsaaj/JavaVersions/Stacks/AbstractStack/AbstractStack.htm

A: Change:

Stack<String> stack = new Stack<String>();

to be

Stack<Character> stack = new Stack<Character>();

and refactor your method's code as necessary; i.e. What is the easiest/best/most correct way to iterate through the characters of a string in Java?
A: I did it with a different kind of stack, but I suspect this might help:

private static String reverseWord(String in) {
    if (in.length() < 2) {
        return in;
    }
    return reverseWord(in.substring(1)) + in.substring(0, 1);
}

private static String reverseSentence(String in) {
    StringBuilder sb = new StringBuilder();
    StringTokenizer st = new StringTokenizer(in);
    while (st.hasMoreTokens()) {
        if (sb.length() > 0) sb.append(' ');
        sb.append(reverseWord(st.nextToken()));
    }
    return sb.toString();
}

public static void main(String[] args) {
    String sentence = "Hi dog cat";
    String expectedOutput = "iH god tac";
    System.out.println(expectedOutput.equals(reverseSentence(sentence)));
}

Outputs true
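The stack-per-word idea the answers describe can be stripped to its essentials. Here is the same LIFO logic in Python for illustration (a plain list standing in for Java's Stack class; this is a sketch, not the asker's program):

```python
def reverse_each_word(sentence):
    """Reverse the letters of every word, keeping word order, via an explicit stack."""
    reversed_words = []
    for word in sentence.split(" "):
        stack = []                 # a Python list works as a stack
        for ch in word:
            stack.append(ch)       # push each character
        out = ""
        while stack:
            out += stack.pop()     # popping in LIFO order reverses the word
        reversed_words.append(out)
    return " ".join(reversed_words)

print(reverse_each_word("Hi dog cat"))  # iH god tac
```

The key difference from the code in the question is that a fresh stack is built per word, so word order is preserved while each word's letters are reversed.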
doc_4920
A: Solved this using Node.js and an HID library.
doc_4921
I have been trying to change permission settings using the following code:

private void changePermissionSettings(String resourceId)
        throws GeneralSecurityException, IOException, URISyntaxException {
    com.google.api.services.drive.Drive driveService = getDriveService();
    JsonBatchCallback<com.google.api.services.drive.model.Permission> callback =
            new JsonBatchCallback<com.google.api.services.drive.model.Permission>() {
        @Override
        public void onFailure(GoogleJsonError e, HttpHeaders responseHeaders) throws IOException {
            Log.e("upload", "Permission Setting failed");
        }

        @Override
        public void onSuccess(com.google.api.services.drive.model.Permission permission,
                              HttpHeaders responseHeaders) throws IOException {
            Log.e("upload", "Permission Setting success");
        }
    };
    BatchRequest batchRequest = driveService.batch();
    com.google.api.services.drive.model.Permission userPermission =
            new com.google.api.services.drive.model.Permission()
                    .setType("user")
                    .setRole("writer");
    driveService.permissions().create(resourceId, userPermission)
            .setFields("id")
            .queue(batchRequest, callback);
    com.google.api.services.drive.model.Permission contactPermission =
            new com.google.api.services.drive.model.Permission()
                    .setType("anyone")
                    .setRole("reader");
    driveService.permissions().create(resourceId, contactPermission)
            .setFields("id")
            .queue(batchRequest, callback);
    batchRequest.execute();
}

private com.google.api.services.drive.Drive getDriveService()
        throws GeneralSecurityException, IOException, URISyntaxException {
    Collection<String> elenco = new ArrayList<>();
    elenco.add("https://www.googleapis.com/auth/drive");
    GoogleAccountCredential credential = GoogleAccountCredential.usingOAuth2(this, elenco)
            .setSelectedAccountName(getAccountName());
    Log.e("upload", getAccountName());
    return new com.google.api.services.drive.Drive.Builder(
            AndroidHttp.newCompatibleTransport(), new JacksonFactory(), credential).build();
}

But this code is not working. What am I doing wrong?
doc_4922
Example Value:
name-form-na-stage0:3278648990379886572,rules-na-unwanted-sdfle2:6886328308933282817,us-disdg-order-stage1:1273671130817907765

Desired Output:
3278648990379886572,6886328308933282817,1273671130817907765

The title always starts with a letter and ends with a colon, so I can see how REGEXP_REPLACE might work: replace any string that starts with a letter and ends with a colon with ''. But I am not good at REGEXP_REPLACE patterns. ChatGPT is down, fml. Side note: if anyone knows of a good guide for understanding pattern notation for regular expressions, it would be much appreciated!

I tried this and it is not working:

REGEXP_REPLACE(REPLACE(REPLACE(codes, ':', ' '), ',', ' '), ' [^0-9]+ ', ' ')

A: This solution assumes a few things:

* No colons anywhere else except immediately before the numbers
* No number at the very start

At a high level, this query finds how many colons there are, splits the entire string into that many parts, keeps only the number up to the comma immediately after it, and then aggregates the numbers into a comma-delimited list.
Assuming a table like this:

create temp table tbl_string (id int, strval varchar(1000));
insert into tbl_string values (1, 'name-form-na-stage0:3278648990379886572,rules-na-unwanted-sdfle2:6886328308933282817,us-disdg-order-stage1:1273671130817907765');

with recursive cte_num_of_delims AS (
    select max(regexp_count(strval, ':')) AS num_of_delims
    from tbl_string
),
cte_nums(nums) AS (
    select 1 as nums
    union all
    select nums + 1
    from cte_nums
    where nums <= (select num_of_delims from cte_num_of_delims)
),
cte_strings_nums_combined as (
    select id, strval, nums as index
    from cte_nums
    cross join tbl_string
),
prefinal as (
    select *, split_part(strval, ':', index) as parsed_vals
    from cte_strings_nums_combined
    where parsed_vals != '' and index != 1
),
final as (
    select *,
        case when charindex(',', parsed_vals) = 0 then parsed_vals
             else left(parsed_vals, charindex(',', parsed_vals) - 1)
        end as final_vals
    from prefinal
)
select listagg(final_vals, ',') from final
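The pattern being hunted for here is simply "a run of digits that follows a colon". Before wrestling with a SQL dialect's regex functions, it can help to prototype the expression somewhere interactive; the same idea in Python (Redshift's REGEXP functions use a broadly similar POSIX-style syntax, though feature support differs, so treat this only as a way to check the pattern's logic):

```python
import re

codes = ("name-form-na-stage0:3278648990379886572,"
         "rules-na-unwanted-sdfle2:6886328308933282817,"
         "us-disdg-order-stage1:1273671130817907765")

# One capture group: the digits immediately after each colon.
numbers = re.findall(r":(\d+)", codes)
print(",".join(numbers))
# 3278648990379886572,6886328308933282817,1273671130817907765
```

Note the digit inside a title like "sdfle2" is not captured, because each match must start at a colon; that is the same assumption the SQL answer states (no colons except immediately before the numbers).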
doc_4923
Remote Server: let's say remote1.

I have the following shell script on the local1 server:

#!/bin/sh
comm1=`arping -c 3 -s 192.168.xxx.xx 192.168.yyy.yy | grep "reply from 192.168.yyy.yy" | awk '{print $2,$3,$4}' | tail -1 | awk '{print $3}'`
comm2="192.168.yyy.yy"
if [[ "$comm1" == *"$comm2"* ]]
then
    echo "IP is up and running fine"
else
    echo "IP is not up and running"
fi

If I run the above script on the local server, I get the desired output, which is "IP is up and running fine". But if I copy the same script to the remote1 server and try to execute it from remote1 against the local1 server, it executes the script but gives the following output:

bind: Cannot assign requested address
IP is not up and running

So the remote1 server is able to execute the script on the local1 server, but it skips the true condition with the error mentioned above. FYI, I ran the following command on the remote1 server:

ssh root@xxx.xxx.xxx.xx /root/script.sh

Any idea where I am going wrong or how to overcome this error?

A: For the remote server you have to switch the IPs:

arping -c 3 -s 192.168.yyy.yy 192.168.xxx.xx

The server IP ends with yyy.yy, but you are trying to use xxx.xx as the source IP for the command (-s), which is denied by the remote OS.
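The grep/awk pipeline in the script just checks whether any arping reply line names the target IP. When debugging why a run mis-parses, the same check can be reproduced offline against a captured output line; a quick Python sketch (the sample line format is an assumption, since arping variants print slightly different text):

```python
import re

# A sample captured arping reply line; real output varies by arping build.
line = "Unicast reply from 192.168.10.20 [AA:BB:CC:DD:EE:FF]  1.234ms"

target = "192.168.10.20"
# Same test as grep "reply from <ip>": does any reply line name the target?
up = re.search(r"reply from " + re.escape(target), line) is not None
print("IP is up and running fine" if up else "IP is not up and running")
```

If the captured line parses correctly but the remote run still fails, the problem is before parsing (here, the bind error from the wrong -s source address), not in the text handling.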
doc_4924
Uncaught TypeError: Cannot read property 'msie' of undefined

when I try to console.log a view from CouchDB. I'm new to coding and I don't know what this error means. When I use an AJAX call to do basically the same thing, it works and pulls the view from my Couch database. This is for school, so I need to get the Couch call working. Thanks for any help.

HTML:

<html>
<head>
    <title>SK8 TEAMS</title>
    <meta charset="utf-8"/>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" type="text/css" href="css/jquery.mobile.flatui.css"/>
    <script src="jquery-1.9.1.min.js"></script>
    <script src="jquery.mobile-1.3.2.min.js"></script>
    <link rel="stylesheet" href="style/main.css"/>
</head>
<body>
    <section data-role="page" id="home" data-theme="c">
        <section data-role="header" data-position="fixed">
            <h1>Home</h1>
        </section>
        <section data-role="content" class="ui-body-c">
            <ul id="homeItems" data-role="listview">
            </ul>
        </section>
    </section>
    <script src="jquery.couch.js"></script>
    <script src="my.js" type="text/javascript"></script>
</body>

JAVASCRIPT:

$(document).on('pageinit', '#home', function () {
    $.couch.db("sk8team").view("app/company", {
        success: function (data) {
            console.log(data);
        }
    });
});

A: I also had a similar error, but I solved it. It's a piece of code I got from Stack Overflow. Just paste it in your script, and download the latest version of jQuery. I used jquery-1.10.2.js.

JavaScript:

jQuery.browser = {};
(function () {
    jQuery.browser.msie = false;
    jQuery.browser.version = 0;
    if (navigator.userAgent.match(/MSIE ([0-9]+)\./)) {
        jQuery.browser.msie = true;
        jQuery.browser.version = RegExp.$1;
    }
})();

A: Did a little more research; the issue is that jquery.couch.js is not compatible with jQuery 1.9. Try an older version of jQuery, or use the browser-support plugin.

A: $.browser was removed in jQuery 1.9; you can use either navigator.userAgent or the jQuery Migrate plugin. https://github.com/jquery/jquery-migrate/
doc_4925
In case there's any concern: each of these checks will be initiated by a real live consumer who is actually interested in whether or not something's available at that store. There will be no superfluous requests or other internet badness.

I'm using Selenium's Grid framework so that I can run stuff in parallel, and I'm keeping each of the controlled browsers open between requests. The issue I'm experiencing is that I need to perform these checks across a number of different domains, and I won't know in advance which one I will have to check next. I didn't think this would be too big an issue, but it turns out that when a Selenium browser instance gets made, it gets linked to a specific domain, and I haven't been able to find any way to change what domain that is. This requires restarting a browser each time a request comes in for a domain we're not already linked to.

Oh, and the reason we're using Selenium instead of something more lightweight (e.g. Mechanize) is because we need something that can handle JavaScript.

Any help on this would be greatly appreciated. Thanks in advance.

A: I suppose you are restricted from changing domain because of the same-origin policy. Did you try using browsers with elevated security privileges, like iehta for Internet Explorer and chrome for Firefox browsers? While using these browser modes, use the open method in your tests and pass the URL which you want to open. This might solve your problem.
doc_4926
SELECT device.mac, reseller.name, agent.name
FROM device
LEFT JOIN global_user ON device.global_user_id = global_user.id
LEFT JOIN agent ON global_user.id = agent.global_user_id
LEFT JOIN reseller ON global_user.id = reseller.global_user_id
    OR agent.reseller_id = reseller.id
WHERE device.global_user_id IN (
    SELECT global_user_id FROM reseller WHERE id = '200'
)
OR device.global_user_id IN (
    SELECT global_user_id FROM agent WHERE reseller_id = '200'
);

I'm trying to get a list of all of the devices, with some reseller/agent details, under a particular reseller. This would include devices assigned directly to the reseller and devices assigned to agents under the reseller. reseller.id is unique. It will be executed on a PostgreSQL database. Devices are assigned to both agents and resellers. Agents are assigned to resellers.

This query works, but I haven't often used an OR in a JOIN and I usually try to avoid subqueries. This query concept will be used often, so I'd like to make sure I haven't overlooked something. Thank you for any feedback.

A: You could give this a whirl:

SELECT device.mac, reseller.name, agent.name
FROM device
JOIN (
    SELECT global_user_id FROM reseller WHERE id = '200'
    UNION
    SELECT global_user_id FROM agent WHERE reseller_id = '200'
) r ON device.global_user_id = r.global_user_id
LEFT JOIN global_user ON device.global_user_id = global_user.id
LEFT JOIN agent ON global_user.id = agent.global_user_id
LEFT JOIN reseller ON global_user.id = reseller.global_user_id
    OR agent.reseller_id = reseller.id

Clarification: it is always a good idea to try out different variations of a query to make sure you end up with the best-performing query (although often, different variations result in the same execution plan being generated by the query optimiser). Speaking from a SQL Server point of view, the order in which the query is processed means the JOINs are processed before the WHERE clause. So in theory, this JOIN approach should scale the result set down earlier.
A: How about this?

SELECT d.mac, r.name, a.name
FROM device as d, global_user as g, agent as a, reseller as r
WHERE d.global_user_id = g.id
  AND g.id = a.global_user_id
  AND (g.id = r.global_user_id OR a.reseller_id = r.id)
  AND (r.id = '200' OR a.reseller_id = '200');

A: I try to avoid sub-queries and the IN clause if it is easy to replace them. If I understood your DB model correctly, this query should produce the same result:

SELECT DISTINCT device.mac, reseller.name, agent.name
FROM device
LEFT JOIN global_user ON device.global_user_id = global_user.id
LEFT JOIN agent ON global_user.id = agent.global_user_id
LEFT JOIN reseller ON global_user.id = reseller.global_user_id
    OR agent.reseller_id = reseller.id
WHERE reseller.id = '200' OR agent.reseller_id = '200'
doc_4927
protected void done() {
    try {
        get();
    } catch (InterruptedException | ExecutionException e) {
        // do something with any exceptions here
    }
}

A: There is nothing wrong with your syntax; this should work fine. This is an old error that does not matter with Java 7 or later versions. I'm guessing you are using DrJava on a Mac? DrJava on Mac is not compatible with Java 7 or 8; that's why you are getting that error.
doc_4928
I've been doing nothing but Rails projects for the last year. Now, I have a client who wants their application converted from ASP.NET Web Forms to ASP.NET MVC. This is the first time I've done MVC in C#, so I'm trying to see how different things are and if certain productive Rails tasks map over to ASP.NET MVC.

First of all, is there such a thing as a Scaffold in ASP.NET MVC? I see something called an Area, but I don't know if that's quite what I want. Also, how can I generate a scaffold (models, controllers and views), just a controller, or just a model based on the same information I would give a Rails app? For example, I might do something like:

$> script/generate scaffold person first_name:string last_name:string

which produces a Person model, a migration script (what I run to build the database table), a People controller, and views for each of the RESTful actions (index, new, create, edit, update, show, destroy). Can I do something like this in Visual Web Developer 2010 Express?

A: There is MVC Scaffolding with MVC3. Here's a nice post on it.

A: Whereas Rails tries (especially for beginners) to guide you into one way to write your app, MVC attempts to be all things to all people. So it's very flexible, but it's hard to specify the "one true way" to scaffold something. So one way which works is:

1. Create your DB. Create an Entity Framework model in the usual way from the DB.
2. Compile.
3. Right-click Controllers, Add, Controller. Check the box for actions.
4. Right-click one of the generated actions, choose Add View.
5. Check the box for "Create strongly typed view" and select scaffolding from the combo box.

But there are many other ways! There are third-party tools for migrations, but nothing built in. What is built into full VS (maybe not Express) is database comparison and merge-script generation, an arguably more powerful, but perhaps harder for new developers to understand, alternative.
doc_4929
A: Having the concept of an Under Storage, and keeping the data and metadata in sync between Alluxio and the Under Storage, is the key difference between Alluxio and HDFS. Besides that, there are a few other differences, consequences of the fact that Alluxio is designed to host hot data and implements the semantics of a distributed cache, whereas HDFS is designed to be a persistent storage service.

* Alluxio provides configurable eviction policies.
* Alluxio natively supports operations like setting TTLs (see link).
* The number of block copies in HDFS is a fixed constant for persistency (3 by default; one can use the setrep command to change the replication level in HDFS). However, the number of block replicas in Alluxio can change automatically based on the popularity of different blocks. If a block is accessed by multiple different applications on different servers, there can be more copies.
* Alluxio supports tiered storage, so one can configure multiple tiers with MEM, SSD and HDD (see link).
doc_4930
I'm beginning with AngularJS, and I'm struggling on two points.

1st, I have to create a connection with my MySQL, which I haven't managed until now.
2nd, I have to display the content on the HTML page.

I'm using the following code, which includes app.js, page.html and data.json (I'll change that to PHP later if I'm allowed to). app.js seems to work fine, but the view (page.html) isn't displaying any data.

App.js:

app.controller('PatientListCtrl', ['$scope', '$http',
    function ($scope, $http) {
        $http.get('patients.json').success(function (data) {
            $scope.patients = data;
        });
        $scope.orderProp = 'id';
    }
]);

patients.json:

[
    { "id": 0, "first_name": "John", "last_name": "Abruzzi" },
    { "id": 1, "first_name": "Peter", "last_name": "Burk" }
]

Page.html:

<!DOCTYPE html>
<html class="no-js" ng-app="app">
<head>
    <meta charset="utf-8" />
    <title>AngularJS Plunker</title>
</head>
<body>
    <div data-ng-repeat="patient in patients">
        {{patient.id}}{{patient.last_name}}{{patient.first_name}}{{patient.SSN}}{{patient.DOB}}
    <div>
    <script src="http://cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.20/angular.min.js"></script>
    <script src="app.js"></script>
</body>
</html>

Thanks for your attention.

A: You have not defined the controller in the view:

<div ng-controller="PatientListCtrl" data-ng-repeat="patient in patients">
    <li> {{patient.id}} </li>
<div>

Here is the working Fiddle.
doc_4931
1) There are the exposed modules of the library.
2) There are internal build-dependencies that should not be exported as part of the library.
3) There are external build-dependencies.

There is a bit of overlap in the Cabal file. For the library I write:

exposed-modules: The list of exposed modules
other-modules:   The list of other modules
build-depends:   The list of build dependencies

Then for the executable:

other-modules:   The list of exposed modules and other modules needed in the executable
build-depends:   The list of build dependencies

What would be nice is if Cabal let me have a variable:

V1 = List of exposed modules
V2 = List of other modules
V3 = List of build dependencies

Then in the executable, for example, I could do:

other-modules: V1, V2
build-depends: V3

Alternatively, I would take a recommendation for a better way to use the Cabal system!

A: No, this is not possible yet. I think we have a feature request for something like this on the issue tracker somewhere. Note, however, that your executable can depend on the library defined in the same .cabal file, so you don't have to share exposed-modules and other-modules:

Name: some-package
Version: 0.1
[...]

Library
  build-depends: some-dependency >= 1.0, ...
  exposed-modules: A, B, C
  other-modules: C, D, E
  [...]

Executable some-exe
  main-is: SomeExe.hs
  build-depends: some-package == 0.1

For a real-world example, see here.
doc_4932
I am using the RTD theme and building to HTML. The problems I have at the moment are:

* Custom type annotations created using the typing module are printed in their full form, but I would like just their name to be printed (e.g. CustomType = Union[type1, type2, type3] should be rendered as simply CustomType, but gets rendered as Union[type1, type2, type3] instead).
* The method signature is printed on a single line; I would like it printed in an indented custom form.
* The method signature should highlight syntax somehow, like in an IDE.

I am not sure how to achieve these customisations; it seems to me there isn't any option in the HTML theme conf to do so. The first thing I tried was something like this code, but I kinda got stuck. At first I also thought about doing a fork of Sphinx (like this user did to try to solve another annoying problem [PR]), but then I realized that something like this is really complex if you don't know the project very well...

I will provide further details if necessary.

A: This doesn't fully satisfy what I was looking for, but at least from release 2.1 they added the possibility to exclude type hints from the method signature. You can enable this feature by setting, in the docs' conf.py file, the following line:

autodoc_typehints = "none"

I would also suggest then setting this variable to specify a minimal Sphinx version:

needs_sphinx = "2.1"
doc_4933
but when I click the notifications tab, it appears behind the iframe, as you can see in the image below. Does anyone have an idea what the problem might be? http://i.stack.imgur.com/0lBjC.png

A: I've found the solution to the problem. In the HTML template I added an attribute:

attributes.wmode = "opaque";
doc_4934
Here is the constructor that has all the fields from my Flight class:

public Flight(String name, int num, int miles, String origin, String destination) {
    Airlinename = name;
    flightnumber = num;
    numofmiles = miles;
    Origincity = origin;
    Destinationcity = destination;
}

And the part of my program where I create the object and try to read the data from the file. I had created a blank constructor in my class too, because I wasn't sure if I was supposed to put anything in the object when I created it.

Flight myFlight = new Flight();
File myFile = new File("input.txt");
Scanner inputFile = new Scanner(myFile);
while (inputFile.hasNext()) {
    myFlight = inputFile.nextLine();
}
inputFile.close();

A: Just in case you use special characters, you need to modify your program so you can read them correctly.

Scanner inputFile = new Scanner(myFile, "UTF-8");

On the other hand, if the text file contains the following, subsequent calls to nextInt() could possibly generate a runtime exception:

Gran España
1
1001
New York
Los Angeles

If that were the case, the reading should be different:

myFlight = new Flight(inputFile.nextLine(),
        Integer.parseInt(inputFile.nextLine()),
        Integer.parseInt(inputFile.nextLine()),
        inputFile.nextLine(),
        inputFile.nextLine());

As with any program, adding more conditions to improve the model needs more and more code. Good luck.

A: Try

myFlight = new Flight(inputFile.next(), inputFile.nextInt(), inputFile.nextInt(), inputFile.next(), inputFile.next());

A: You can't directly assign a string line to an object; you need to parse the line into parts and assign the values to variables one by one.

A: How is your input text organized? The Scanner's nextLine() method returns a String object, and you definitely cannot assign that to the type Flight. There are different methods to get the different values from the Scanner, like nextInt(), nextFloat().
Basically you can try it like this:

Flight myFlight = new Flight();
myFlight.setFlightName(inputFile.next());
myFlight.setMiles(inputFile.nextInt());
// etc.

This is just a sample; you need to check the format you have in the input.txt file.
doc_4935
CSS:

#nav {
    position: fixed;
    margin: 0px;
    display: block;
}

#canvas {
    text-align: center;
    background-color: transparent;
    border: 2px solid black;
    display: block;
}

HTML:

<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <link href="IV.css" rel="stylesheet" type="text/css">
</head>
<body>
    <nav>
        <div id="toolbar_effects">
            <input type="image" src="Icons/imageviewer.svg" width="50" height="50" id="Imageviewer2" class="s">
            <input type="image" src="Icons/effect-grayscale.svg" width="50" height="50" id="grayscale" class="s">
        </div>
    </nav>
    <center><canvas id="canvas" width="600" height="400"><center>
    </canvas>
</body>
</html>

A: Use z-index in CSS on your navigation: https://css-tricks.com/almanac/properties/z/z-index/ Without that, by default, elements that come later in the code sit higher in the stacking order.

A: Set padding-top on the body. And don't use center tags; they are deprecated. It works: http://jsfiddle.net/soov7k2k/. You probably forgot to position your navbar properly! position: fixed is half of the work. You need to add top: 0 and left: 0 to your navbar, otherwise it won't work. See the fiddle.

body {
    padding-top: 100px; /* or whatever the height of your navbar is */
}
doc_4936
I'm trying to implement simple CRUD actions on a table and was injecting a default Doctrine repository into my controller (without injecting an entity). For the "Update" action I would first ->find($id) the record to update, and it would return an instance of the entity for me to bind to my form object. For the "Create" action I realized I can't ->find($id) a record to insert (since it doesn't exist) in order to retrieve an instance of the entity for me to bind to my form object. Is there an alternate way to insert data using Doctrine without an instance of an entity? Or is there a way to retrieve an instance of the entity from the repository so I can ->bind() it to the form? If the answer to both is no, then I imagine my only options are to inject an instance of the entity into my controller, or to use a custom repository which contains a method that returns an entity to use in the ->bind() for insertion. My guess would be to define a custom repository which has a method that retrieves an empty entity instance for use in insertion. Is this assumption correct?

A: As pointed out by @Crisp in the comments, entities are no more than PHP classes, and the same goes for repositories. The two are differentiated by their respective roles. You'll never explicitly create a new instance of a Repository because Doctrine does it for you through DependencyInjection principles (Service, Factory, ...). To create a new database entry, you must create a new instance of the corresponding entity, then store it using the EntityManager::persist and EntityManager::flush methods. Reusing the same instance of an entity would not give you any benefit, nor make any difference in your project's maintainability. The entity class itself will never be broken/changed; only instances of it are created, renamed, moved, deleted. These instances represent your database entries; this is the primary interest of using an ORM.
doc_4937
The problem that I am facing is: I have downloaded node.js and run the command npm install socket.io, and I get this message: image (sorry for the link to the drive but I don't yet have the required rep to post images) After that I have run the command node server.js

server.js:

// Generated by CoffeeScript 1.9.1
(function() {
  var io;
  io = require('socket.io').listen(4000);
  io.sockets.on('connection', function(socket) {});
}).call(this);

which is running in the cmd prompt but taking forever. Any help would be really appreciated.

A: It should run forever. It's how node.js works. This node process is your server. Your client application will connect to it via your port 4000. Press Ctrl+C to stop it.
doc_4938
'HGHGSD_JHJSD_HGSDHGJD_GFSDGFSHDGF_GFSD'
or
'SJDGh-SUDYSUI-jhsdhsj-YTsagh-ytetyyuwte-sagd'
or
'hwerweyri~sdjhfkjhsdkjfhds~jsdfhjsdhf~mdnfsd,mfn'

Based on a formula, a substring after the special character (-, _ or ~) always has to be returned, but this substring may come after the first, second or third occurrence of that special character. I used the Charindex and Substring functions in SQL Server, but only the first part of the character string (the part before the selected character) is ever returned. For example:

select SUBSTRING ('hwerweyri~sdjhfkjhsdkjfhds~jsdfhjsdhf~mdnfsd,mfn', 0, CHARINDEX('~', 'hwerweyri~sdjhfkjhsdkjfhds~jsdfhjsdhf~mdnfsd,mfn', 0))

returned value: hwerweyri

If there is a solution for this purpose or you have a piece of code that can solve this problem, please advise. It is important to mention that the position of the special character must be entered by ourselves into the function, for example after the third repetition, the second repetition or the tenth repetition. The method or code should be such that the position can be entered dynamically, and the function does not need to be defined statically. For example:

'HGHGSD_JHJSD_HGSDHGJD_GFSDGFSHDGF_GFSD' ==> 3rd substring ==> 'GFSDGFSHDGF'
'HGHGSD_JHJSD_HGSDHGJD_GFSDGFSHDGF_GFSD' ==> second substring ==> 'HGSDHGJD'
'HGHGSD_JHJSD_HGSDHGJD_GFSDGFSHDGF_GFSD' ==> 1st substring ==> 'JHJSD'

The position will be sent to the function through a programmed form, and the generated numbers will be between 1 and 15. These numbers are actually the production efficiency of a product whose form is designed in the C# programming language. The numbers sent to the function are variable; each time a different number may be sent to the function and applied to the desired character string. The output should look something like the above. I don't know if I managed to get my point across correctly or not.
A: Try the following function: CREATE FUNCTION [dbo].[SplitWithCte] ( @String NVARCHAR(4000), @Delimiter NCHAR(1), @PlaceOfDelimiter int ) RETURNS Table AS RETURN ( WITH SplitedStrings(Ends,Endsp) AS ( SELECT 0 AS Ends, CHARINDEX(@Delimiter,@String) AS Endsp UNION ALL SELECT Endsp+1, CHARINDEX(@Delimiter,@String,Endsp+1) FROM SplitedStrings WHERE Endsp > 0 ) SELECT f.DataStr FROM ( SELECT 'RowId' = ROW_NUMBER() OVER (ORDER BY (SELECT 1)), 'DataStr' = SUBSTRING(@String,Ends,COALESCE(NULLIF(Endsp,0),LEN(@String)+1)-Ends) FROM SplitedStrings ) f WHERE f.RowId = @PlaceOfDelimiter + 1 ) How to use: select * from [dbo].[SplitWithCte](N'HGHGSD_JHJSD_HGSDHGJD_GFSDGFSHDGF_GFSD', N'_', 3) or select DataStr from [dbo].[SplitWithCte](N'HGHGSD_JHJSD_HGSDHGJD_GFSDGFSHDGF_GFSD', N'_', 3) Result: GFSDGFSHDGF
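The lookup the function performs is easy to sanity-check outside the database. This is a minimal Python sketch (the helper name is made up, it is not part of the SQL solution) of what SplitWithCte computes: the piece of the string at a given delimiter-separated position, counted the same way as the @PlaceOfDelimiter argument.

```python
def split_at(s, delimiter, place):
    # Return the substring at the given position, counted the same way as
    # SplitWithCte: place 1 is the text after the first delimiter,
    # place 2 the text after the second delimiter, and so on.
    parts = s.split(delimiter)
    return parts[place] if 0 <= place < len(parts) else None

print(split_at("HGHGSD_JHJSD_HGSDHGJD_GFSDGFSHDGF_GFSD", "_", 3))  # GFSDGFSHDGF
```

This matches the three examples in the question: position 3 gives 'GFSDGFSHDGF', position 2 gives 'HGSDHGJD', position 1 gives 'JHJSD'.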
doc_4939
Opening non-standard ports in corporate environments is HELL. How do we offer a web application on port 80, and also stream Server-Sent Events on the same port? My current solution is an in-house web application that uses Server-Sent Events on port 9000 to push dashboard updates.

Edit: More Details

In my solution the real-time data is not being served from IIS. I have a console app that receives and processes external real-time events and then pushes those to a URL on port 9000. The console application uses an HttpListener to serve the server-sent events. The IIS application points to the event source from that console app to display live statistics to the web application's users. This requires IT Security to allow traffic on the non-standard port 9000 for the set of users who need to access the web application. Please, what alternatives would you suggest?

A: Absolutely. Server-sent Events are just standard HTTP, so you'll be on port 80 by default. What makes you think that you need to serve them on a nonstandard port?
doc_4940
I don't know how to separate deserialisation and insertion from my Index() method, so every time I refresh the page it deserializes and inserts data again.

Controller:

public class HomeController : Controller
{
    private DBContext db = new DBContext();
    public string data = "...."; //here is the json data that i get from API

    public ActionResult Index()
    {
        RootObj myData = JsonConvert.DeserializeObject<RootObj>(data);
        foreach (var item in myData)
        {
            MyModel myItem = new MyModel
            {
                name = item.name,
                symbol = item.symbol,
                price = item.price,
            };
            db.MyItemsDB.Add(myItem);
            db.SaveChanges();
        }
        return View(db.MyItemsDB.ToList());
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            db.Dispose();
        }
        base.Dispose(disposing);
    }
}

How can I separate deserialisation and insertion from the Index() method? Thanks for help!

A: I didn't think about the fact that my json is changing, but it does, and because of that I need to do the insertion into the database on every page refresh. The solution is to make a unique field in the database, so there will be no repeated data.

UPD: The field "name" of myItem must be unique, but I need to update the data in the other fields if an item with this name already exists, so I did this:

db.MyItemsDB.AddOrUpdate(m => m.name, myItem);
db.SaveChanges();

A: Assuming you want to do a database insertion per unique user, you could use a cookie. If the cookie doesn't exist, the user didn't visit the site yet (or has deleted his cookies). If the cookie exists, the user already visited and you don't do the database insertion. It doesn't remove all the insertions, but it reduces them.

If(HttpContext.Current.Response.Cookies.AllKeys.Contains("myCookie") == false)
    db.SaveChanges();
else
{
    HttpCookie myCookie = new HttpCookie("myCookie");
    myCookie.Value = "SomeInfo";
    myCookie.Expires = DateTime.Now.AddDays(1d);
    Response.Cookies.Add(myCookie);
}
doc_4941
I am able to copy it and paste it, it's just super incorrectly formatted. Please help.

a lot of stuff before this....

lastrow = ws.Cells(ws.rows.Count, "A").End(xlUp).Row
' lastrow = ws.Range("A1").End(xlUp).Row
i = i + 1
For i = 3 To lastrow
    Set svalue1 = .getElementbyID("provideANumber")
    svalue1.Value = ws.Cells(i, 1).Value
    For Each eInput In .getElementsbyTagName("input")
        If eInput.getAttribute("value") = "ENTERit" Then
            eInput.Click
            Exit For
        End If
    Next
    IE.Visible = True
    Exit For
Next i

more stuff in between.....

'Copy and Paste Results into Excel
Sheets("Sheet4").Select
Range("A1:Z50") = ""
Range("A1:Z150").Select
Application.Wait DateAdd("s", 10, Now)
IE.ExecWB 17, 0 '//select all
IE.ExecWB 12, 2 '//Copy Selection
ActiveSheet.PasteSpecial Format:="Text", link:=False, displayasicon:=False
Range("A1:Z100").Select

I expect it to show up similarly to how it looks on the website; however, it shows up all together in one column (and not even legible).

A:

Sheets("Sheet4").Select
Range("A1:Z100") = ""
Range("A1:Z100").Select
Selection.ClearContents
Application.Wait DateAdd("s", 2, Now) '//Loads
IE.ExecWB 17, 0 '//select all from webpage
IE.ExecWB 12, 2 '//Copy Selection
Application.DisplayAlerts = False '//Doesnt display alerts
ActiveSheet.Paste
Sheets("Sheet4").Select '//Selects sheet 4 again
Range("A3:Q32").Select
Selection.Copy
'Creates a new sheet after & pastes content into it, formats
Sheets.Add After:=ActiveSheet
ActiveSheet.Paste
Selection.Columns.AutoFit
Selection.rows.AutoFit

This code allowed me to copy and paste the data from the web page the way it's formatted on the web page.
doc_4942
Select Tbl.Fromdate, Tbl.Por, Tbl.Porname, Tbl.Bmref3
From(
    Select To_Char(P.Fromdate, 'dd-mm-yyyy') As Fromdate, P.Por, P.Porname, W.Bmref3,
        RANK() OVER (PARTITION BY P.Por ORDER BY P.fromdate DESC) AS rank
    From Tmsdat.Climandatecomps W
    Inner Join Tmsdat.Portfolios P On (W.Porik = P.Porik)
    Where 1=1
) Tbl
Where 1=1
And Tbl.Rank = 1;

However, I wish to select only the observations that have a Fromdate more recent than June 30, 2021. I tried to add Tbl.Fromdate > '30-06-2021' to the WHERE clause, but I did not receive the desired results. Do you have any suggestions? Thank you in advance. Best regards,

A: You would put the condition in the inner query, where FromDate is still a real date (after To_Char it is only a string, so comparing it with '30-06-2021' compares text, not dates):

Select To_Char(P.Fromdate, 'dd-mm-yyyy') As Fromdate, P.Por, P.Porname, W.Bmref3,
    RANK() OVER (PARTITION BY P.Por ORDER BY P.fromdate DESC) AS rank
From Tmsdat.Climandatecomps W
Inner Join Tmsdat.Portfolios P On (W.Porik = P.Porik)
Where p.FromDate > date '2021-06-30'
doc_4943
The operation method is described in the official document, but it needs several mouse clicks, which is very inconvenient. I'm learning C++ and often writing examples, so I want a faster method. I tried to find it in Options > Keyboard but could not find an action for this. The keyboard shortcut for New Project is Ctrl+Shift+N; however, that cannot add the project to the current solution by default, and the settings need to be modified manually.

A: The name of the shortcut key is File.AddNewProject, not File.NewProject.
doc_4944
package web.com;

@WebServlet(name="AndroidResponse", urlPatterns={"/androidres.do"})
public class AndroidResponse extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        String un, pw;
        un = request.getParameter("username");
        pw = request.getParameter("password");
        if(un.equalsIgnoreCase("prashant") && pw.equals("sharma"))
            out.print(1);
        else
            out.print(0);
    }
}

A: You have to do it like this. The method request.getParameter("username") gives you an object; you have to cast it to String. Hope this will work for you.

String username = (String)request.getParameter("username");
String password = (String)request.getParameter("password");
doc_4945
Consider loosening your query with OR. bike OR shed will often show more results than bike shed. What is meant by bike and shed? Are these some symbols on the keyboard? A: What is meant by this is that searching for the string "bike shed" will yield less results than searching for entries that contains either "bike" OR "shed". Suppose you have three rows: 1: I like my bike 2: Buy my shed 3: Where is the bike shed Searching for "bike" yields rows 1 and 3. Searching for "shed" yields rows 2 and 3. Searching for "bike shed" yields row 3. Searching for "bike" OR "shed" yields rows 1, 2 and 3.
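The widening effect of OR can be reproduced in a few lines of Python. This is only an illustration of the answer's three-row example, not how the search engine itself is implemented:

```python
rows = [
    "I like my bike",          # row 1
    "Buy my shed",             # row 2
    "Where is the bike shed",  # row 3
]

def search_all(rows, terms):
    # "bike shed": every term must appear, so fewer rows match
    return [r for r in rows if all(t in r for t in terms)]

def search_any(rows, terms):
    # "bike OR shed": any single term is enough, so more rows match
    return [r for r in rows if any(t in r for t in terms)]

print(search_all(rows, ["bike", "shed"]))  # only row 3
print(search_any(rows, ["bike", "shed"]))  # rows 1, 2 and 3
```

The OR result is always at least as large as the AND result, which is exactly why loosening the query shows more hits.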
doc_4946
SELECT * FROM table_name

this returns this info

id  | c_id | data_id | name         | value
159 | 6    | 15      | salutation   | MR
160 | 6    | 15      | full-name    | blah
161 | 6    | 15      | phone-number | 123456789
162 | 6    | 15      | email        | blah@blah.com
171 | 6    | 16      | salutation   | MRS
172 | 6    | 16      | full-name    | blah
173 | 6    | 16      | phone-number | 9876543210
174 | 6    | 16      | email        | blah2@blah.com

What I actually want to return is a table of data that looks like this

data_id | salutation | full-name | phone-number | email
15      | MR         | blah      | 123456789    | blah@blah.com
16      | MRS        | blah      | 9876543210   | blah2@blah.com

Is there a query that can do this, or is it better to reformat the initial SQL result with PHP into the structure I need?

Disclaimer: I am a noob when it comes to SQL and have only ever written the most basic of queries.
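For the "reformat in code" option, the reshaping step itself is simple. Here is a sketch of the pivot in Python (the language is only for illustration; the same loop carries over to PHP directly): group the rows by data_id and promote each name into a key.

```python
# The rows exactly as the flat SELECT returns them: (id, c_id, data_id, name, value)
rows = [
    (159, 6, 15, "salutation", "MR"),
    (160, 6, 15, "full-name", "blah"),
    (161, 6, 15, "phone-number", "123456789"),
    (162, 6, 15, "email", "blah@blah.com"),
    (171, 6, 16, "salutation", "MRS"),
    (172, 6, 16, "full-name", "blah"),
    (173, 6, 16, "phone-number", "9876543210"),
    (174, 6, 16, "email", "blah2@blah.com"),
]

pivoted = {}
for _id, c_id, data_id, name, value in rows:
    # one dict per data_id, with the name column promoted to a key
    pivoted.setdefault(data_id, {})[name] = value

print(pivoted[15]["salutation"])  # MR
```

On the SQL side the usual name for this is conditional aggregation: GROUP BY data_id with one MAX(CASE WHEN name = '...' THEN value END) expression per output column.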
doc_4947
Violation of PRIMARY KEY constraint 'PK_table1'. Cannot insert duplicate key in object 'table1'. The statement has been terminated.

In this case the primary key is an IDENTITY column. I do not include that column in my INSERT statements. I ran

DBCC CHECKIDENT (table1, noreseed)

The current identity value and the current column value are NOT the same. If I run the same command in 5 min, they become the same. I cannot figure out what the problem is. Any help is greatly appreciated.

A: If the destination table is not empty then you want to reseed the identity column to the next highest existing value like so:

Declare @Max bigint
Set @Max = ( Select Max(IdCol) From TableA ) + 1
DBCC CHECKIDENT( TableA, RESEED, @Max )

A: You can use the bcp command for this. You can specify whether the identity is checked or not.
doc_4948
<system.net>
  <defaultProxy>
    <proxy usesystemdefault="False" proxyaddress="http://localhost" bypassonlocal="True" />
    <bypasslist>
      <add address="[a-z]+\.flickr\.com\.+" />
    </bypasslist>
  </defaultProxy>
</system.net>

that returns: System.Net.WebException: The remote server returned an error: (404) Not Found. What went wrong? Thanks

A: There are two possible scenarios here:

1: If you are building a client app (e.g. Console or WinForms) and want to access http://localhost using WebClient or HttpWebRequest without any intervening proxies, then bypassonlocal="True" should accomplish this. In other words, your app.config should look like this:

<system.net>
  <defaultProxy>
    <proxy usesystemdefault="False" bypassonlocal="True" />
  </defaultProxy>
</system.net>

2: if, however, you're trying to get your ASP.NET app (running on http://localhost) to be able to correctly resolve URIs either with a proxy or without one, then you'll need to set up proxy info correctly in your web.config (or in machine.config so you won't have to change your app's web.config), so ASP.NET will know whether you are running a proxy or not. Like this:

Home:

<system.net>
  <defaultProxy>
    <proxy usesystemdefault="False" bypassonlocal="True" />
  </defaultProxy>
</system.net>

Work:

<system.net>
  <defaultProxy>
    <proxy usesystemdefault="False" proxyaddress="http://yourproxyserver:8080" bypassonlocal="True" />
  </defaultProxy>
</system.net>

It's also possible to use proxy auto-detection, to pick up settings from the registry, etc. but I've always shied away from those approaches for server apps... too fragile. BTW, if you find that things are configured correctly, and you still get the error, the first thing I'd recommend is to code up a quick test which manually sets the proxy before your WebClient/HttpWebRequest call, instead of relying on configuration to do it.
Like this:

WebProxy proxyObject = new WebProxy("http://proxyserver:80/", true);
WebClient wc = new WebClient();
wc.Proxy = proxyObject;
string s = wc.DownloadString("http://www.google.com");

If the requests don't go through your work proxy correctly even when you're using code, even if the proxy is correctly configured in your code, then the proxy itself may be the problem.

A: In WebClient there are no issues downloading data locally, but downloading from the internet is a problem, so configure the following. In your Web.config add the lines below and fill in your internet proxy address and port:

<system.net>
  <defaultProxy useDefaultCredentials="true" enabled="true">
    <proxy usesystemdefault="False" proxyaddress="http://your proxy address:port" bypassonlocal="True" />
  </defaultProxy>
  <settings>
    <servicePointManager expect100Continue="false" />
  </settings>
</system.net>

Now your program logic will work for downloading content from the internet and public URLs.
doc_4949
script -f MININET_NODE_TTY after xterm h1 won't work either, because the output of ip -a is the same as in the original bash session, so I can't properly ping nodes.

A: Use sudo mnexec -a [PID] bash for each host PID in the dump mininet CLI command.
doc_4950
But how do I insert sCode into Table1? I'm new to MS Access programming.

Private Sub Command120_Click()
    Dim sCode As String
    Dim i As Long
    For i = 1 To Me.Qty
        sCode = Format(Now(), "YYMMDDHHNNSS") & Format(i, "0000")
    Next i
End Sub

A: At least two ways - in both I'll assume the field itself is called sCode...

1) Use DAO:

Private Sub Command120_Click()
    Dim RS As DAO.Recordset, sCode As String, i As Long
    Set RS = CurrentDb.OpenRecordset("Table1")
    For i = 1 To Me.Qty
        sCode = Format(Now(), "YYMMDDHHNNSS") & Format(i, "0000")
        RS.AddNew
        RS!sCode = sCode
        RS.Update
    Next i
End Sub

2) Use an SQL statement:

Private Sub Command120_Click()
    Dim DB As DAO.Database, sCode As String, i As Long
    Set DB = CurrentDb
    For i = 1 To Me.Qty
        sCode = Format(Now(), "YYMMDDHHNNSS") & Format(i, "0000")
        DB.Execute ("INSERT INTO Table1 (sCode) VALUES ('" & sCode & "')")
    Next i
End Sub

You may also want to wrap things up in a transaction if you want to be sure none rather than some of the updates will go through when there is an error.
doc_4951
As shown in the screenshot To change button backColour using code you do this: button_1.BackColor = Color.Red; Using this code I can only select any colour from the Web list but I can not find a colour which is listed in the System section. For Example I can not find the "InactiveCaptionText" colour programmatically. Anyone knows how to change button backColour to a colour from the System list using code? A: Try to use the System.Drawing.SystemColors, you might find there the InactiveCaptionText you're searching for. A: You are looking for SystemColors.InactiveCaptionText Property which returns a Color structure that is the color of the text in an inactive window's title bar.and use as below: using System.Drawing; //Method call button_1.BackColor = SystemColors.InactiveCaptionText; A: Why not to use RGB color...or hex function and create your own color. private Color RgbExample() { // Create a green color using the FromRgb static method. Color myRgbColor = new Color(); myRgbColor = Color.FromRgb(0, 255, 0); return myRgbColor; } To get all colors List<System.Windows.Media.Color> listOfMediaColours = new List<System.Windows.Media.Color>(); foreach(KnownColor color in Enum.GetValues(typeof(KnownColor))) { System.Drawing.Color col = System.Drawing.Color.FromKnownColor(color); listOfMediaColours.Add(System.Windows.Media.Color.FromArgb(col.A, col.R, col.G, col.B)); } Hex way to get color using System.Windows.Media; Color color = (Color)ColorConverter.ConvertFromString("#FFDFD991");
doc_4952
async function downloadFile(url: string, path: string) {
  const writer = fs.createWriteStream(path),
    response = await axios({
      url,
      method: "GET",
      responseType: "stream"
    });
  response.data.pipe(writer);
  return new Promise((resolve, reject) => {
    writer.on("finish", resolve);
    writer.on("error", reject);
  });
}

Other times it downloads, but it produces 0-byte files... Also, right now I hard-code the file type before the request, but how can I know the file type so I can use the correct one when saving the file? Right now I just use:

await downloadFile(req.body.url, join("../tmp/", uuid, "/input.mp4"));

A:

const http = require('http');
const fs = require('fs');

const file = fs.createWriteStream("image.jpg");
const request = http.get("http://i3.ytimg.com/vi/J---aiyznGQ/mqdefault.jpg", function(response) {
  response.pipe(file);
});

Follow the same approach for all types of files.
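One way to stop hard-coding ".mp4" is to look at the response's Content-Type header (response.headers["content-type"] in the axios call above) and map it to an extension. The helper below is a sketch with an assumed, deliberately small mapping table; extend it for the types you actually expect.

```javascript
// Hypothetical mapping table: only a few common types, extend as needed.
const EXTENSIONS = {
  "video/mp4": ".mp4",
  "audio/mpeg": ".mp3",
  "image/png": ".png",
  "image/jpeg": ".jpg",
};

function extensionFromContentType(contentType) {
  if (!contentType) return "";
  // Strip any "; charset=..." or "; q=..." suffix before the lookup.
  const mime = contentType.split(";")[0].trim().toLowerCase();
  return EXTENSIONS[mime] || "";
}

console.log(extensionFromContentType("video/mp4"));          // .mp4
console.log(extensionFromContentType("image/jpeg; q=0.9"));  // .jpg
```

With that in place, the save path can be built as join("../tmp/", uuid, "/input" + extensionFromContentType(response.headers["content-type"])) instead of a fixed "/input.mp4".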
doc_4953
c:\WorkSpace\dbscripts>bash init_db.sh
c:\WorkSpace\dbscripts>

bash is present on the container as Git is installed.

c:\WorkSpace\dbscripts>where bash
c:\git\bin\bash.exe

Is there any way by which I can run .sh files inside a Windows container?

C:\Users\Jayesh>docker version
Client:
 Version: 18.03.1-ce
 API version: 1.37
 Go version: go1.9.5
 Git commit: 9ee9f40
 Built: Thu Apr 26 07:12:48 2018
 OS/Arch: windows/amd64
 Experimental: false
 Orchestrator: swarm
Server:
 Engine:
  Version: 18.03.1-ce
  API version: 1.37 (minimum version 1.24)
  Go version: go1.9.5
  Git commit: 9ee9f40
  Built: Thu Apr 26 07:21:42 2018
  OS/Arch: windows/amd64
  Experimental: true
doc_4954
These Kafka values also contain a RecordTime field and other fields inside the JSON object. This streaming job upserts a Kudu table according to the Id field. After a while we noticed that some updates are really not reflecting the latest state of the values for some Id fields. We assume 4 different executors processing per partition, and when one of them finishes earlier than the others it updates the target Kudu table. So if we have values like below:

(Id=1, val=A, RecordTime: 10:00:05 ) partition1
(Id=2, val=A, RecordTime: 10:00:04 ) partition1
(Id=1, val=B, RecordTime: 10:00:07 ) partition2
(Id=1, val=C, RecordTime: 10:00:06 ) partition3
(Id=2, val=D, RecordTime: 10:00:05 ) partition1
(Id=2, val=C, RecordTime: 10:00:06 ) partition4
(Id=1, val=E, RecordTime: 10:00:03 ) partition4

then the Kudu table should be like this:

Id  Value  RecordTime
1   B      10:00:07
2   C      10:00:06

But sometimes we saw the Kudu table like this:

Id  Value  RecordTime
1   A      10:00:05
2   C      10:00:06

The trigger interval is 1 minute. So, how can we achieve the ordered update of the target Kudu table?

* Should we use a single partition for ordering, and if we do this, what are the pros/cons?
* For Spark Streaming, how can we pick the latest record and values per trigger interval?
* Upsert the Kudu table according to both Id and RecordTime, but how?
* Is there any other approach we can think about?

Hope I could explain my problem enough. Briefly, how can we achieve event ordering per micro-batch interval in Spark Streaming? Special thanks to anyone who can help me.

A: As you are sourcing the data from Kafka, it is useful to recall that Kafka provides ordering guarantees only within a topic partition. Therefore, you can solve your issue if you have your Kafka producer produce all the messages for the same Id into the same partition. This can either be achieved by a custom partitioner in your KafkaProducer, or if you simply use the value of Id as the "key" part of the Kafka message.
If you do not have control over the Kafka producer you will need to make your Spark Streaming job stateful. Here, the challenging part is to define a time frame for how long your job should wait for other messages with the same Id to arrive. Is it just a few seconds? Maybe a few hours? I have found that this can be difficult to answer, and sometimes the answer is "a few hours", which means you need to keep the state for a few hours, which could make your job go OutOfMemory.
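The "same key, same partition" guarantee mentioned above comes from the producer hashing the message key to pick a partition. A rough Python sketch of the idea (Kafka's default partitioner actually uses murmur2, not crc32; this is only to show the shape):

```python
import zlib

def partition_for(key, num_partitions):
    # Deterministic hash of the key modulo the partition count:
    # the same Id always maps to the same partition.
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# All updates for a given Id land in one partition and are consumed in order.
for msg_id in ["1", "2", "1", "2", "1"]:
    print(msg_id, "->", partition_for(msg_id, 4))
```

Because the mapping is deterministic, every message for Id=1 is appended to, and read back from, the same partition, which is exactly the per-key ordering the answer relies on.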
doc_4955
list_1 = [1, 2, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 1, 2, 1, 2, 3, 1, 1, 2, 3, 4, 1]

In this list, the value right after a 4 is either greater or smaller than 4 itself. More specifically, the smaller value is always 1. The values of the interval from that 1 to the next 1 are always less than 4. How can I find and replace the values of such an interval, for example as follows:

list_2 = [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0, 0, 0, nan, nan, nan, nan, nan, nan, nan, 0]

1, 2, 1 get replaced because the first 1 is right after and smaller than 4. So the interval from that first 1 to the second 1 gets replaced. Other values don't have to be nan, I just highlight the replacements. To be clear, if the value right after is greater than 4 we will skip it.

A: As an option:

for i in range(len(list_1) - 1):          # looking at all the elements in the list in order
    if list_1[i] == 4:                    # if the number with index i equals 4
        if list_1[i + 1] > 4:             # if the number after 4 is greater than 4
            list_1[i + 1] = float('nan')  # the number after 4 becomes nan

A:

list1 = [1, 2, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 1, 2, 1, 2, 3, 1, 1, 2, 3, 4, 1]
list2 = []
i = 0
while i != len(list1):
    if list1[i] == 4:
        list2.append("nan")
        if list1[i+1] > 4:
            list2.append("Superior to 4 after a 4")
            i += 1
        else:
            list2.append("nan")
            i += 1
    else:
        list2.append("nan")
    i += 1
print(list2)

The code isn't optimized, but it will put "nan" if the corresponding value isn't superior to 4 after a 4, and will put "Superior to 4 after a 4" if it is.

A: Here's my try to let numpy do most of the heavy lifting; unfortunately I do resort to for loops at the end. Hope someone can suggest and maybe edit for a better solution.
this is the code: import numpy as np list_1 = np.array([1, 2, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 1, 2, 1, 2, 3, 1, 1, 2, 3, 4, 1]) modified_list_1 = np.append(list_1,1) # to always have a "next 1" idx_of_4 = np.where(modified_list_1==4)[0] idx_of_1 = np.where(modified_list_1==1)[0] idx_of_4_followed_by_1 = np.intersect1d(idx_of_4, idx_of_1-1) arr_slice_idx = [(start, np.min(idx_of_1[idx_of_1>(start+1)])) for start in idx_of_4_followed_by_1] for start,end in arr_slice_idx: list_1[start+1:end+1] = 0 print(list_1) I start off by finding the indices of 4s followed by 1s by using np.where and np.intersect1d, which is vectorized and should work really fast Unfortunately here I ran out of inspiration, and to find the "next 1" that closes each range I used a regular (rather ugly) comprehension. Then when I have the start and the end, I use them to slice the original array and set the value to 0.
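For comparison, here is a plain-Python sketch of the same rule without numpy (the function name is made up): when a 4 is followed by a smaller value, everything from that value up to and including the next 1 is replaced.

```python
def replace_after_four(values, fill=0):
    out = list(values)
    i = 0
    while i < len(out) - 1:
        if out[i] == 4 and out[i + 1] < 4:  # a 4 followed by a smaller value
            j = i + 1                       # first element of the interval
            k = j + 1
            while k < len(out) and out[k] != 1:  # extend to the next 1 (or the end)
                k += 1
            for m in range(j, min(k, len(out) - 1) + 1):
                out[m] = fill
            i = k + 1
        else:
            i += 1
    return out

list_1 = [1, 2, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 1, 2, 1, 2, 3, 1, 1, 2, 3, 4, 1]
print(replace_after_four(list_1))
```

On the question's list this replaces exactly the 1, 2, 1 run after the third 4 and the trailing 1 after the last 4, leaving everything else untouched.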
doc_4956
Now I'm looking at using an MVVM framework to make life easier, and I came across Caliburn and ReactiveUI. Caliburn in this scenario is not so easy to use, as it needs to be initialised at an application level in a WPF application. Does the same apply to ReactiveUI, or can I make it work with a couple of WPF controls?

A: ReactiveUI doesn't need to be initialized at the app level by default; it should work fine with your scenario. If it doesn't, make sure to ping the mailing list and let me know about it!

A: For anyone else coming to this question in the future, Caliburn.Micro now has support for initialisation from anywhere (from version 1.1). See this discussion thread.
doc_4957
input xml:

<workorder>
  <newwo>1</newwo>
</workorder>

If newwo is 1, then I have to set it in my output as "NEW", else "OLD". Expected output is:

newwo: "NEW"

my xslt is:

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" version="2.0">
  <xsl:template match="/">
    <xsl:apply-templates select="NEWWO" />
  </xsl:template>
  <xsl:template match="/NEWWO">
    <xsl:text>{ newwo:" </xsl:text>
    <xsl:choose>
      <xsl:when test="NEWWO != '0'">NEW</xsl:when>
      <xsl:otherwise>OLD</xsl:otherwise>
    </xsl:choose>
    <xsl:text>" }</xsl:text>
  </xsl:template>
</xsl:stylesheet>

Please help me. Thanks in advance!

A: I see a number of reasons you aren't getting output.

* The xpaths are case sensitive. NEWWO is not going to match newwo.
* You match / and then apply-templates to newwo (case fixed), but newwo doesn't exist at that context. You'll either have to add */ or workorder/ to the apply-templates (like select="*/newwo") or change / to /* or /workorder in the match.
* You match /newwo (case fixed again), but newwo is not the root element. Remove the /.
* You do the following test: test="newwo != '0'", but newwo is already the current context. Use . or normalize-space() instead. (If you use normalize-space(), be sure to test against a string. (Quote the 1.))

Here's an updated example.
XML Input <workorder> <newwo>1</newwo> </workorder> XSLT 1.0 <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:output method="text"/> <xsl:template match="/*"> <xsl:apply-templates select="newwo" /> </xsl:template> <xsl:template match="newwo"> <xsl:text>{&#xA;newwo: "</xsl:text> <xsl:choose> <xsl:when test=".=1">NEW</xsl:when> <xsl:otherwise>OLD</xsl:otherwise> </xsl:choose> <xsl:text>"&#xA;}</xsl:text> </xsl:template> </xsl:stylesheet> Output { newwo: "NEW" } A: You try it as below <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" version="1.0"> <xsl:template match="/"> <xsl:choose> <xsl:when test="/workorder/newwo = 1"> <xsl:text disable-output-escaping="no"> newwo:New</xsl:text> </xsl:when> <xsl:otherwise> <xsl:text disable-output-escaping="no"> newwo:Old</xsl:text> </xsl:otherwise> </xsl:choose> </xsl:template> </xsl:stylesheet>
doc_4958
Console.logging it seems to give me a bunch of native mongo properties ( which look like functions ie each, toArray, etc) so it seems right, but it's not a regular object with a data field that I can see. After it hits that if block with the if(docs==null), the connection gets closed and it will not execute the each block in the else if. Ideally if there was a way to help troubleshoot or figure out how to make this execute that would be great. More background: in the mongo shell I can ask for use weather // no issues and get the results of the data object which is 3000 records with an empty find(); var MongoClient = require('mongodb').MongoClient; MongoClient.connect('mongodb://localhost:27017/weather', function(err, db) { if(err){ console.log("line 7" + err); } var query = {}; var projection = { 'State' : 1, 'Temperature' : 1 }; var cursor = db.collection('data').find(query, projection); console.log("cursor" + cursor); // [object Object] var state = ''; var operator = {'$set' : {'month_high' : true } }; cursor.each(function(err, doc) { if (err) throw err; if (doc == null) { console.log("docs have value:" + doc); //NULL VALUE so will close on line 23 return db.close(); } else if (doc.State !== state) { // first record for each state is the high temp one state = doc.State; db.collection('data').update( {'_id':doc._id}, operator, function(err, updated) { if (err) console.log(err); // return db.close(); ? }); } }); }); { [MongoError: Connection Closed By Application] name: 'MongoError' } //doh { [MongoError: Connection Closed By Application] name: 'MongoError' } //doh { [MongoError: Connection Closed By Application] name: 'MongoError' } //doh A: Figuring out when to call db.close() can be a bit messy. Here it is rewritten with find().toArray() plus some logic to test when you're updating the last matched doc. This works for me. 
var MongoClient = require('mongodb').MongoClient; var assert = require('assert'); var Q = require('q'); MongoClient.connect('mongodb://localhost:27017/weather', function(err, db) { assert.equal(null, err); var query = {}; var projection = { 'State' : 1, 'Temperature' : 1 }; var state = ''; var operator = {'$set' : {'month_high' : true } }; var promises = []; db.collection('data').find(query, projection).toArray(function(err, docs) { assert.equal(null, err); docs.forEach(function(doc, index, arr) { var deferred = Q.defer(); promises.push(deferred.promise); if (null !== doc && state !== doc.State) { db.collection('data').update( {'_id':doc._id}, operator, function(err, updated) { assert.equal(null, err); console.log("Updated "+updated+" documents."); deferred.resolve(); }); } else { deferred.resolve(); } }); Q.all(promises).done(function() { console.log("closing"); db.close() }); }); }); EDIT: Added Q since db.close() was still called prematurely in some cases.
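The same "close only after every update has finished" shape works with native Promises too. This sketch (a hypothetical helper, no driver involved) just shows the aggregation pattern the Q version uses:

```javascript
// Collect one promise per non-null doc; the caller closes the
// connection in the .then() of the combined promise.
function updateAll(docs, updateOne) {
  const promises = docs
    .filter((doc) => doc !== null)
    .map((doc) => updateOne(doc));
  return Promise.all(promises); // e.g. .then(() => db.close())
}

// Demo with a fake async update:
const seen = [];
updateAll([{ id: 1 }, null, { id: 2 }], (doc) => Promise.resolve(seen.push(doc.id)))
  .then(() => console.log("all updates done:", seen));
```

Calling db.close() in the Promise.all handler guarantees no update is still in flight when the connection goes away, which is exactly the error the original cursor.each version ran into.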
java.sql.SQLIntegrityConstraintViolationException: Cannot add or update a child row: a foreign key constraint fails (library.book_loans, CONSTRAINT book_loans_ibfk_3 FOREIGN KEY (isbn) REFERENCES book (isbn))

Here is my code for the tables:

CREATE TABLE `book_loans` (
  `loan_id` int(11) NOT NULL,
  `isbn` varchar(10) DEFAULT NULL,
  `Card_ID` mediumint(9) NOT NULL AUTO_INCREMENT,
  `date_out` date DEFAULT NULL,
  `due_date` date DEFAULT NULL,
  `date_in` date DEFAULT NULL,
  PRIMARY KEY (`loan_id`),
  KEY `book_loans_ibfk_2` (`Card_ID`),
  KEY `isbn` (`isbn`),
  CONSTRAINT `book_loans_ibfk_2` FOREIGN KEY (`Card_ID`) REFERENCES `borrower` (`Card_ID`),
  CONSTRAINT `book_loans_ibfk_3` FOREIGN KEY (`isbn`) REFERENCES `book` (`isbn`)
) ENGINE=InnoDB AUTO_INCREMENT=1002 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

CREATE TABLE `book` (
  `isbn` varchar(10) NOT NULL,
  `title` varchar(500) DEFAULT NULL,
  PRIMARY KEY (`isbn`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

CREATE TABLE `borrower` (
  `Card_ID` mediumint(9) NOT NULL AUTO_INCREMENT,
  `Ssn` varchar(11) DEFAULT NULL,
  `Bname` varchar(50) DEFAULT NULL,
  `Address` varchar(100) DEFAULT NULL,
  `Phone` varchar(15) DEFAULT NULL,
  PRIMARY KEY (`Card_ID`),
  UNIQUE KEY `Ssn` (`Ssn`)
) ENGINE=InnoDB AUTO_INCREMENT=1002 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

A: This exception occurs when you insert a value into the "isbn" column of the "book_loans" table that does not exist in the set of primary key values of the "book" table's "isbn" column.
Possible Duplicate: JQuery to check for duplicate ids in a DOM

Suppose I have this code:

<div id="one">
    <div id="two"></div>
    <div id="three"></div>
</div>
<div id="four">
    <div id="two"></div>
</div>
<div id="one">
    <p id="five">
        <span id="three"></span>
    </p>
</div>

(a large HTML document with different DOM items).

Objective: Is it possible to build jQuery or JavaScript code that will alert me about duplication of ids within the document, with the position? Here the position means something like the following:

duplicate id: 'div#two'
within `div#four`, `div#one`

duplicate id: 'div#one'
parent of `p#five`

duplicate id: 'span#three'
within `p#five`

and such a pattern.

Note: I found a similar problem, but not an exact one. As it is not a duplicate of any question asked before, don't CLOSE IT.

A: If all you want to do is root out duplicate ids, you should validate your html: http://validator.w3.org/ This will alert you to duplicate ids and make sure your code is well formed.

A: NOTE: Read all caveats. The point of this code is to illustrate the nature of the problem, which is that a pure JS solution is inadvisable. First of all, hopefully what this illustrates is that sometimes things that are doable are not always advisable. There are a ton of awesome tools out there that will provide far better error checking, like W3C's validator or add-ins/extensions that utilize it, like Validity for Chrome. Definitely use those. But anyway, here's a minimalist example. Note that no DOM node holds a reference to its own line number, so you have to get the entire innerHTML attribute from the documentElement as a string. You match parts of that string, then break it into a substring at the match position, then count the number of carriage returns. Obviously, this code could be extensively refactored, but I think the point is clear (also a jsFiddle example for those who want it, although the lines will be fubar).

EDIT: I've updated the regex to not match examples like <div>id="a"</div>.
Still, if the OP wants something pure JS, he'll have to rely on this or a considerably more complex version with very minor benefits. The bottom line is that there are no associations between DOM nodes and line numbers. You will have to, on your own, figure out where the ID attributes are and then trace them back to their position. This is extremely error-prone. It might make some sense as programming practice but is extremely inadvisable in the real world. The best solution -- which I'm reiterating for the fourth time here -- is an extension or add-in that will just send your page on to a real validator like the W3C's. The code below is designed to "just work," because there is no good way to do what the OP is asking.

<!DOCTYPE HTML>
<html>
<head>
    <meta http-equiv="content-type" content="text/html; charset=utf-8">
    <title>Test</title>
</head>
<body>
    <div id="a"></div>
    <div id="a"></div> <!-- catches this -->
    <div id="b"></div>
    <div>id="a"</div>
    <div id="c"></div>
    <div id="c"></div> <!-- catches this -->
    <span>[id="a"]</span>
    <script>
        var re = /<[^>]+id="(.*?)"[^>]*>/g; // match id="..."
        var nl = /\n\r?/g;                  // match newlines
        var text = document.documentElement.innerHTML;
        var match;
        var ids = {}; // for holding IDs
        var index = 0;
        while (match = re.exec(text)) {
            // Get current position in innerHTML
            index = text.indexOf(match[0], index + 1);
            // Check for a match in the IDs array
            if (match[1] in ids) {
                // Log line number based on how many newlines are matched
                // up to current position, assuming an offset of 3 -- one
                // for the doctype, one for <html>, and one for the current
                // line
                console.log("duplicate match at line " + (text.substring(0, index).match(nl).length + 3));
            } else {
                // Add to ID array if no match
                ids[match[1]] = null;
            }
        }
    </script>
</body>
</html>
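The fragile part of the snippet above is the line-number mapping; the duplicate-detection half, by contrast, reduces to a single pass over the collected ids. Here is a hedged sketch of that pass as a standalone function. The {id, path} input shape is an assumption standing in for whatever a DOM walk produces (in a browser you would build the records from something like document.querySelectorAll('[id]')); keeping the function DOM-free means it can be unit-tested.

```javascript
// Collect duplicate ids from a list of {id, path} records, where `path` is a
// CSS-like selector describing where the element sits (hypothetical shape).
function findDuplicateIds(records) {
  const seen = new Map();              // id -> array of paths where it appears
  for (const { id, path } of records) {
    if (!seen.has(id)) seen.set(id, []);
    seen.get(id).push(path);
  }
  const dupes = [];
  for (const [id, paths] of seen) {
    if (paths.length > 1) dupes.push({ id, paths });
  }
  return dupes;                        // [{ id, paths: [...] }, ...]
}
```

Reporting positions as selector paths instead of line numbers sidesteps the innerHTML/newline counting entirely, which is where the original approach breaks down.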
I have 100 JavaScript functions which are almost identical, but every one should work with its own 1000 rows. I want to call the necessary function, or a few of them, at any time based on some conditions. For example, if based on a condition I need to work with the 5th thousand of rows of the array, I should call the 5th function. One more condition: a function should be called at a certain time. This means that a few functions can be called at the same time, which is why I can't use just one function with different arguments. I thought I could name the functions 'function1', 'function2', .., 'functionN'. But I don't know how to call them based on a condition. I think maybe there is a way like:

if (someVar == 5) {
    function5();
}

Is there such a way to call a function in JavaScript? I appreciate any help.

A: You can assign functions to an array, i.e.:

var funct = [];
funct[5] = function (...) {
    ...
}

and then you can call them like this:

if (someVar == 5) {
    funct[5]();
}

It's also a good idea to use more descriptive keys for the functions; you can think of them as function names. But in this case you should use an object instead of an array, i.e.:

var funct = {};
funct['send-data'] = function (...) {
    ...
}

if (someVar == 5) {
    funct['send-data']();
}
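The answer's dispatch-table pattern, made concrete and runnable: the loop below builds the 100 handlers instead of writing them by hand, and the row-range arithmetic is purely illustrative (one handler per block of 1000 rows, matching the question's setup; the dispatch helper name is made up).

```javascript
// Dispatch table: pick a handler by key instead of chaining if/else.
// Each handler owns one 1000-row block of the input array.
const handlers = {};
for (let i = 1; i <= 100; i++) {
  handlers[i] = function (rows) {
    const start = (i - 1) * 1000;    // `let i` is captured per iteration
    return rows.slice(start, start + 1000);
  };
}

function dispatch(n, rows) {
  const fn = handlers[n];
  if (!fn) throw new Error('no handler for ' + n);
  return fn(rows);
}
```

Because the handlers are ordinary values, several of them can be scheduled independently (e.g. with setTimeout), which covers the "few functions called at the same time" requirement without naming functions function1 through function100.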
np.random.seed(0)
df1 = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'id': ['2', '23', '234', '2345'], '2021': np.random.randn(4)})
df2 = pd.DataFrame({'key': ['B', 'D', 'E', 'F'], 'id': ['23', '2345', '67', '45'], '2022': np.random.randn(4)})

  key    id      2021
0   A     2  1.764052
1   B    23  0.400157
2   C   234  0.978738
3   D  2345  2.240893

  key    id      2022
0   B    23  1.867558
1   D  2345 -0.977278
2   E    67  0.950088
3   F    45 -0.151357

I want to have unique keys. If the key is already found, just update the row; otherwise insert a new row. I am not sure if I have to use merge/concat/join. Can anyone give insight on this please?

Note: I have used a full outer join; it returns duplicate columns. I have edited the input dataframes after posting the question. Thanks!

A: You can do it using the merge function:

df = df1.merge(df2, on='key', how='outer')

df
  key      2021      2022
0   A  1.764052       NaN
1   B  0.400157  1.867558
2   C  0.978738       NaN
3   D  2.240893 -0.977278
4   E       NaN  0.950088
5   F       NaN -0.151357

EDIT In case you need to merge also on 'id':

df = df1.merge(df2, on=['key', 'id'], how='outer')

key    id      2021      2022
  A     2  1.764052       NaN
  B    23  0.400157  1.867558
  C   234  0.978738       NaN
  D  2345  2.240893 -0.977278
  E    67       NaN  0.950088
  F    45       NaN -0.151357

A: I think you need to create an index from key and then join in concat:

df = pd.concat([df1.set_index('key'), df2.set_index('key')], axis=1).reset_index()
print (df)

  key      2021      2022
0   A  1.764052       NaN
1   B  0.400157  1.867558
2   C  0.978738       NaN
3   D  2.240893 -0.977278
4   E       NaN  0.950088
5   F       NaN -0.151357

A: Given your description, it looks like you want combine_first. It will merge the two datasets by replacing the duplicates in order.

df2.set_index('key').combine_first(df1.set_index('key')).reset_index()

Output:

  key      2021      2022
0   A  1.764052       NaN
1   B  0.400157  1.867558
2   C  0.978738       NaN
3   D  2.240893 -0.977278
4   E       NaN  0.950088
5   F       NaN -0.151357
id: 2015-12-04_08_01_48-5133681620861162960

This is our 3rd pipeline to fail when either reading from or writing to BigQuery. Is there an issue currently?

Workflow failed. Causes: (9e23278dca02e8a7): BigQuery import job "dataflow_job_5133681620861162507" failed.
Causes: (9e23278dca02ee74): BigQuery creation of import job for table "events_2015_12_05_denormalized" in dataset "PROJECT_MALBEC_DENORMALIZATION" in project "<removed>" failed.
Causes: (9e23278dca02e441): Error: Message: Not found: Job roy-morgan-ua-model:dataflow_job_5133681620861162507 HTTP Code: 404

A: This is a known issue when Dataflow retries a BigQuery job after a timeout. We're actively working on a fix.
Goal: a web app using crowd-sourcing to semantically describe the contents of photos in photo collections, i.e. to describe the depicted "scenes" in terms of observable subjects and objects, their respective appearances (= characteristic features), their associations, their actions, etc., in a textual (tag-based) manner so that a feature-based semantic search (initially) and navigation (afterwards) paradigm is supported. Undoubtedly, I'll need to enhance recent folksonomic (tagging) approaches with a semantic tech to organize and browse the contents: a tech nearly as simple and flexible/dynamic as social tagging and powerful enough to formalize the required statements.

I think most of these constructs should be supported:

* Concepts, concept types, and concept instances, with respective system-supported relations such as is-a, is-instance-of, is-subtype-of, etc. Examples:
  * 'Man', 'Woman' is-a 'Person', 'Person' is-a 'Animate', etc.
  * 'Peter Parker' is-instance-of 'Man'. 'Mary Jane' is-instance-of 'Woman'.
  * 'Brasilia' is-city-of 'Brazil' located-in 'South America'.
* Concept features/properties (= typed attributes and relations) with system- and user-defined names (the different kinds).
* [concerning the attributes]:
  * Attributes of simple data type, as well as
  * Attributes of complex type (composites; reference to concept or concept type), i.e. the system supports a 'has-part' relation.
  * Perhaps the distinction between 'single-' and 'multi-valued' attributes.
  Examples:
  * 'Person' has attribute 'last-name' of type 'string', 'age' of type 'int', etc., for: concept 'Person Parker' first-name 'Peter', age '29'.
  * 'Person' has relation 'knows' to another 'Person', e.g. for: 'Peter Parker' knows 'Mary Jane Watson'.
  * 'Peter Parker' wears 'body suit' of-color 'red and blue'.
* [concerning the relations]:
  * Mostly binary, BUT also some cases of n-ary relations, e.g. the ternary relation "cuts_with(Person, Object, Tool)" for expressing "Peter cuts bread with knife". So, actually we have hyper-graphs, but higher-order relations could be handled through multiple binary relations (reification).
  * Domain and/or range restrictions for relations: e.g. relation 'has-human-part' goes from concept 'Person' to concept 'HumanPart'.
  * Relations on relations, in other words: secondary statements on primary statements. E.g.: "'Harry Osborn' suspects ('Peter Parker' knows 'Spiderman')", i.e. a combined/higher-order range. The other case: "('Plastic' x 'Metal') is-glued-by 'mySuperGlue' (instance of Glue)", i.e. a combined/higher-order domain.
* Topology/location-based descriptions, e.g.
  * 'Mary Jane' stands-behind 'Peter'.
  * The river 'abc' is-to-the-south-of church 'xyz'.
  * 'Shark swarm' is-on-the-upper-right (of the image).

So my main question basically is:

Q1: What semantic representation technology would you use for this web app context?
* Would you go with a special semantic network type? Which (optimally light-weight) type would be powerful enough? Or would you instead go with a Semantic Web tech like RDF(S) or OWL(S)? In this case, which one would at least be needed?
* Which storage kind would the selected semantic tech use or be appropriate for? RDBMS, graph DBs, or triple stores?

Q2: Do you know any good similar project(s) you could point me to?

Thank you all very much for your suggestions.

A: I would always go for Schema.org since it is made by Google and Google is the biggest player here.
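To make the Schema.org suggestion concrete: a scene like the "Peter Parker knows Mary Jane" example can be serialized as JSON-LD, Schema.org's usual carrier format. The sketch below is illustrative, not a full mapping of the requirements above; the property choices (Photograph, about, knows, contentLocation) are real Schema.org terms, but the way they are combined here is an assumption.

```javascript
// A minimal JSON-LD description of one annotated photo, built as a plain
// JS object. Nested objects stand in for the concept/instance distinction
// from the question (Person is the concept, the named people are instances).
const photo = {
  '@context': 'https://schema.org',
  '@type': 'Photograph',
  about: [
    {
      '@type': 'Person',
      name: 'Peter Parker',
      knows: { '@type': 'Person', name: 'Mary Jane Watson' },
    },
  ],
  contentLocation: { '@type': 'City', name: 'Brasilia' },
};

const jsonld = JSON.stringify(photo, null, 2);
```

Note what this buys and what it doesn't: binary relations and instance typing come for free, but the n-ary and relation-on-relation cases from the question would still need reification (or a richer model such as RDF*), exactly as discussed above.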
The first (T) is deduced based on the type of the first parameter passed to the function. The second (ItrT) is deduced by use of std::iterator_traits and T. When I use ItrT as the type of a parameter (see function bar), all types are deduced implicitly, but when I use std::function<void(ItrT)> as the type of a parameter (see function foo), the correct types can only be deduced when fully specifying all template parameters (even when using the exact same code as in the function definition). One would imagine that using ItrT inside a template wouldn't change the compiler's ability to deduce the template. Why isn't this the case? And what do I need to do so that all template parameters can be implicitly deduced? I'm using C++17.

#include <functional>
#include <iterator>
#include <vector>

template <class T, class ItrT = typename std::iterator_traits<T>::value_type>
auto foo(T iterator, std::function<void(ItrT)> expr) -> void {
}

template <class T, class ItrT = typename std::iterator_traits<T>::value_type>
auto bar(T iterator, ItrT expr) -> void {
}

int main() {
    std::vector<int> vec = {1, 2, 3};

    bar(vec.begin(), 1);           // Compiles!

    foo(vec.begin(), [](int) {});  // Fails!

    foo<decltype(vec.begin()),
        std::iterator_traits<decltype(vec.begin())>::value_type>(
        vec.begin(), [](int) {});  // Compiles!
}

A: I'm guessing the confusion here is around the role of default template arguments.

The rule is not: try to deduce a parameter; if deduction fails, then use the default (if provided).

Rather, the rule is: if the parameter is in a deduced context, deduce it; if deduction fails, abort. If it's not in a deduced context, and it's not explicitly provided, use the default argument.

In other words, the default argument is used only if the parameter is neither in a deduced context nor explicitly provided. In both your examples, ItrT is in a deduced context, so the default template argument is not considered at all.

The difference between the two is that you can deduce T from a lambda (you just match its type) but you cannot deduce std::function<void(ItrT)> from a lambda: a lambda can be converted to an appropriate std::function, but a lambda is not a std::function. Template deduction doesn't do conversions. Template deduction just matches patterns.
<?xml version="1.0" encoding="UTF-8"?>
<flowers>
    <flower name="rose">
        <soilType>Podzolic</soilType>
        <visualParameters>
            <stemColor>Green</stemColor>
            <leafColor>Red</leafColor>
            <averageSize>50</averageSize>
        </visualParameters>
        <growingTips>
            <LightType>photophilous</LightType>
            <temperature>38</temperature>
            <watering>1200</watering>
        </growingTips>
        <multiplying>bySeeds</multiplying>
        <origin>Belarus</origin>
        <description>Classic Choice</description>
    </flower>
</flowers>

At build time I am using JAXB to generate my classes from the XSD (this XSD is modified by a few columns). When I try to validate it, it complains: Cannot Resolve The Name 'Flowers' To A(n) 'type Definition' Component.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<xs:schema version="1.0" xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="flowers" type="Flowers"></xs:element>
    <xs:complexType name="flowers">
        <xs:sequence>
            <xs:element name="flower" type="flower" maxOccurs="unbounded" />
        </xs:sequence>
    </xs:complexType>
    <xs:element name="flower" type="FlowerType"/>
    <xs:complexType name="flower">
        <xs:sequence>
            <xs:element name="soilType" type="SoilType" minOccurs="0"/>
            <xs:element name="visualParameters" type="VisualParameters" minOccurs="0"/>
            <xs:element name="growingTips" type="growingTips" minOccurs="0"/>
            <xs:element name="multiplyingType" type="multiplyingType" minOccurs="0"/>
            <xs:element name="origin" type="xs:string" minOccurs="0"/>
            <xs:element name="description" type="xs:string" minOccurs="0"/>
        </xs:sequence>
        <xs:attribute name="name" type="xs:string"/>
    </xs:complexType>
    <xs:complexType name="growingTips">
        <xs:sequence>
            <xs:element name="temperature" type="xs:int"/>
            <xs:element name="watering" type="xs:int"/>
        </xs:sequence>
        <xs:attribute name="value" type="lightingType"/>
    </xs:complexType>
    <xs:complexType name="VisualParameters">
        <xs:sequence>
            <xs:element name="stemColor" type="xs:string" minOccurs="0"/>
            <xs:element name="leafColor" type="xs:string" minOccurs="0"/>
            <xs:element name="averageSize" type="xs:int"/>
        </xs:sequence>
    </xs:complexType>
    <xs:simpleType name="multiplyingType">
        <xs:restriction base="SoilName"/>
    </xs:simpleType>
    <xs:complexType name="SoilType">
        <xs:sequence>
            <xs:element name="value" type="Soil" minOccurs="0"/>
        </xs:sequence>
    </xs:complexType>
    <xs:simpleType name="lightingType">
        <xs:restriction base="xs:string">
            <xs:enumeration value="photophilous"/>
            <xs:enumeration value="unphotophilous"/>
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="SoilName">
        <xs:restriction base="xs:string">
            <xs:enumeration value="byLeafs"/>
            <xs:enumeration value="byCutting"/>
            <xs:enumeration value="bySeeds"/>
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="Soil">
        <xs:restriction base="xs:string">
            <xs:enumeration value="podzolic"/>
            <xs:enumeration value="dirt"/>
            <xs:enumeration value="sodPodzolic"/>
        </xs:restriction>
    </xs:simpleType>
</xs:schema>

How do I set up my schema to validate the XML successfully?

A: I suspect the flowers element declaration is the cause: schema type names are case-sensitive, and there is no type named Flowers, only the complex type flowers. The declaration should probably read:

<xs:element name="flowers" type="flowers"></xs:element>

so that it matches:

<xs:complexType name="flowers">
    <xs:sequence>
        <xs:element name="flower" type="flower" maxOccurs="unbounded" />
    </xs:sequence>
</xs:complexType>

(The flower element has the same problem: it references type="FlowerType" while the complex type is named flower.)
<LinearLayout android:animateLayoutChanges="true" >
    <TextView />
    <TextView />
    <TextView />
</LinearLayout>
<LinearLayout android:animateLayoutChanges="true" >
    <EditText />
</LinearLayout>
</LinearLayout>

I want to hide the middle TextView from the upper child, but the EditText just jumps into place instead of using any animation. When I use setVisibility(View.GONE); on any of these LinearLayouts' children I get an animation for it, but I want the animation to move the second layout as well. Using

ObjectAnimator animation_01 = ObjectAnimator.ofFloat( view, "translationY", 0, -200 );

works like it should on the animation part, but it leaves an empty area under it. Any kind of programmatically started animation seems to mess things up, so I want to use the one that's working pretty well on its own. Putting the children in the same parent is not possible, because it has different background colors etc. (the top part is a header, which has different objects you can hide and show again with a button, setting the visibility). I want the first child's animation to carry over to the second child, but when I hide any objects from the top one, the others follow with no animation. Is this possible?

Solution... for now! This worked, part of it anyway. I might as well use a single LinearLayout, because all I did was set the background color on the individual objects instead of on their parent.

<LinearLayout>
    <LinearLayout android:animateLayoutChanges="true" > <!-- blue bg -->
        <TextView />
        <TextView />
        <TextView />
        <EditText /> <!-- white bg -->
    </LinearLayout>
</LinearLayout>
SQLSTATE[42S22]: Column not found: 1054 Unknown column 'order_products_tables.order_table_id' in 'where clause' (SQL: select * from `order_products_tables` where `order_products_tables`.`order_table_id` = 1 and `order_products_tables`.`order_table_id` is not null)

This error says clearly that it cannot find the order_table_id column in the order_products_tables table, and I am not surprised, which may sound silly, because there is no such field. But there is an order_id field, and the migrations describe which field carries the relationship, so I cannot understand why Laravel tries to refer to order_table_id in order_products_tables.

Migrations Order:

Schema::create('order_tables', function (Blueprint $table) {
    $table->increments('id');
    $table->integer('user_id')->unsigned()->nullable();
    $table->foreign('user_id')
        ->references('id')
        ->on('users');
    $table->timestamps();
});

Migrations OrderProduct:

Schema::create('order_products_tables', function (Blueprint $table) {
    $table->increments('id');
    $table->integer('order_id')->unsigned();
    $table->integer('count')->unsigned();
    $table->integer('price')->unsigned();
    $table->foreign('order_id')
        ->references('id')
        ->on('order_tables');
    $table->timestamps();
});

As follows from the migrations, the order_products_tables table stores the record ID from the order_tables table, and the relationship is based on that ID.

Model Order table:

class OrderTable extends Model
{
    protected $table = 'order_tables';

    public function user()
    {
        return $this->belongsTo(User::class, 'id');
    }

    public function products()
    {
        return $this->hasMany('App\OrderProductTable');
    }
}

Model OrderProduct table:

class OrderProductTable extends Model
{
    protected $table = 'order_products_tables';

    public function order()
    {
        return $this->belongsTo(OrderTable::class, 'id');
    }
}

I do not understand why the reference goes to order_table_id. I have done other relations, e.g. User and Order, on the same principle, and they work without a problem, yet suddenly here I have such a case. Where should I look for a solution, and why does it work there but not here?

A: This error comes from using wrong table names or, to be more correct, not defining the relationship correctly. The following relationship definitions will fix your issue:

class OrderTable extends Model
{
    protected $table = 'order_tables';

    public function products()
    {
        return $this->hasMany('App\OrderProductTable', 'order_id', 'id');
    }
}

class OrderProductTable extends Model
{
    protected $table = 'order_products_tables';

    public function order()
    {
        return $this->belongsTo(OrderTable::class, 'order_id', 'id');
    }
}

The reason your previous code did not work is that Laravel uses default values which are basically assumptions and require your database to follow certain naming conventions. Here is a list of conventions you should follow*:

* Tables should be named after the model name in plural form and snake_case:
  * Model User is supposed to have a table named users.
  * Model Category is supposed to have a table named categories (see the English plural).
  * Model AccountManager is supposed to have a table named account_managers.
* If there is no model for a table, i.e. a many-to-many relationship table, the table name is expected to be in singular form and snake_case, where the model names that hold the relation are ordered alphabetically:
  * If there are models Category and Product (with tables categories and products) and there is a many-to-many relationship (belongsToMany()) between them, the pivot table is expected to be called category_product (and not product_category, because c comes before p in the alphabet).
* Foreign key columns are expected to be named after the model they represent with _id as postfix:
  * When referencing the User model on a BlogPost model, for example, Laravel expects a user_id column as foreign key on the BlogPost model. The referenced primary key on the User model is taken from the $primaryKey property, which is 'id' by default.

For your particular scenario, this means we would expect the following models, tables and columns:

* Model User with table users and columns like in the default migration of Laravel.
* Model Order with table orders and columns like id, user_id, created_at, ...
* Model Product with table products and columns like id, name, price, ...
* Model OrderProduct with table order_products and columns like id, order_id, product_id, quantity, ...

In theory, the model OrderProduct is not necessary. You should also be able to build the same system without it by defining $this->belongsToMany(Product::class)->withPivot('quantity') on the Order model and $this->belongsToMany(Order::class)->withPivot('quantity') on the Product model (note the pivot fields). Personally, I prefer extra models for many-to-many relations though.

For reference on Eloquent relationships, have a look at the documentation. There are examples for all relationship types and additional information on the parameters to use when you need to override the default table or column names for your relations.

* This list may lack important information. It was created as best effort.
Do I need some kind of SSO login API from app.clearstorydata.com, or do I need to just pass the login and password through the API? What steps are involved if I need to implement SSO in the HTML website?
Transactions is a collection of documents, each representing a transaction. Each document contains a reference to the name of the next document (i.e. the next most recent transaction). So for example:

Transaction A
    nextDocId: 'Transaction B'
Transaction B
    nextDocId: 'Transaction C'
Transaction C
    nextDocId: 'Transaction D'

What is the best way to load X transactions given the starting transaction? If I just pick a value for X (say 10) I could chain 10 switchMaps/concatMaps together, but is there a way to do this dynamically? I basically need to repeat an API call X times, but each call requires the response from the last call.

Alternatively, is this solution even viable? I don't see any other way to maintain a sorted list in Firestore, so the other option is to sort the entire list of transactions in the client each time.

A: I finally got this working using expand and bufferCount. There were a few tricks. The first was to define the Firestore call in its own function to get the recursion working as expected:

private getNextTransactionRequest(txnId: string): Observable<any> {
  return this.firestore.collection('myCollection').doc(txnId).snapshotChanges().pipe(
    map(response => {
      return response.payload.data();
    })
  );
}

Then, to string the calls together:

public loadTransactions(headTxnId: string, n: number): Observable<any[]> {
  const getNextTransaction$ = this.getNextTransactionRequest(headTxnId);
  return getNextTransaction$.pipe(
    expand(txn => {
      if (txn) {
        if (txn.nextTransactionId && txn.nextTransactionId != '') {
          return this.getNextTransactionRequest(txn.nextTransactionId);
        }
      }
    }),
    bufferCount(n),
    take(1)
  );
}

expand recursively chains the API calls together using the response from the previous call, which is exactly what I needed, and bufferCount waits until the previous chain of API calls has emitted n transactions and emits them at once as an array.

The one catch with bufferCount is that if nTransactions % n != 0 you will lose some transactions. To solve that, I think I am just going to keep track of the total number of transactions and the total I have loaded in already. Then, if nTotal - nLoaded < n, I just set n = nTotal - nLoaded.

A: You should have a look at the expand operator. You can use the operator as described in the excellent article (by an rxjs core team member) here: expand explained. In the example the next page (in your case, transaction) is fetched one by one. I'll try to find a suggestion.
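For comparison, the same "follow nextTransactionId up to n times" walk can be sketched without RxJS as a plain async loop. This is a hedged sketch, not the accepted answer's pipeline: fetchTxn is a stand-in for whatever resolves one document (here, the snapshotChanges()-based call), and the field name mirrors the answer above.

```javascript
// Follow a linked list of transactions: each fetched doc names the next one.
// `fetchTxn(id)` is assumed to resolve to { id, nextTransactionId, ... }.
async function loadTransactions(fetchTxn, headTxnId, n) {
  const out = [];
  let id = headTxnId;
  while (out.length < n && id) {
    const txn = await fetchTxn(id);
    if (!txn) break;
    out.push(txn);
    id = txn.nextTransactionId;      // '' or missing ends the walk
  }
  return out;                        // may be shorter than n at the tail
}
```

Unlike bufferCount(n), this returns whatever is available when the chain ends early, which sidesteps the nTransactions % n caveat discussed above; the trade-off is that you give up the Observable interface.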
Many solutions exist, such as ctypes, swig, etc. Some of these tools, e.g. swig, have evolved significantly over the years, so updated information can be helpful. To date, performance-wise, what is the best solution for calling existing C code from a Python script?

[Edit] This question is related to, but different from, this question. As mentioned above, I am seeking updated info, whereas that post is dated 2008. In addition, that post is about the quickest way to save the developer's time, not about application performance, which is what I am asking about here.

A: I would consider writing a C extension module. This way, you can keep the most critical parts in C, you can have the compiler optimize your code as much as possible, and you can avoid converting certain values to Python objects. Cython is a nice way of writing C extensions without using C.
#include <QObject>
#include "Logger.h"
#include "PluginManager.h"

class Main : QObject
{
    Main();
    ~Main();

    Logger &getLogger();

signals:
    // Some signals

public slots:
    // Some slots
};

And now I have the PluginManager class. The constructor is:

PluginManager( QObject *parent = 0 );

And I construct it in the main class like this:

pluginManager = new PluginManager(this);

Now, the problem: the server class needs to create the PluginManager (obviously), and the PluginManager has to get the logger from the server class, and all the plugins too!

parent()->getLogger(); // This doesn't work (PluginManager)

Error: 'class QObject' has no member named 'getLogger'

Do I have to create a class and derive it from all the classes? Please post any example that can be helpful. Thanks in advance.

A: To face compile-time dependencies there are two methods:

* Forward declare the classes you need.
* Use an interface to break the cyclic dependency.

Forward declarations

In your PluginManager.h file you just write

class Main;

at the top of your file in order to forward declare Main. Then you declare the constructor of the PluginManager as

PluginManager( Main * parent );

In the implementation file of the PluginManager you then need to include the header which defines the Main class.

Interfaces

The second option uses an interface and avoids the cyclic dependency altogether. It works like this:

class MainInterface : public QObject
{
    Q_OBJECT
public:
    MainInterface( QObject * parent ) : QObject(parent) {}
    virtual ~MainInterface() {}
    virtual void someFunc1() = 0;
    virtual void someFunc2() = 0;
    // ...
};

class PluginManager : public QObject
{
    Q_OBJECT
public:
    PluginManager( MainInterface * parent = 0 ) : QObject(parent) { /* ... */ }
    // ... other functions ...
};

class Main : public MainInterface
{
public:
    Main( QObject * parent = 0 ) : MainInterface(parent) {}
    virtual void someFunc1();
    virtual void someFunc2();
    // ...
};

The dependency graph now looks like this:

    MainInterface
     A         A
     |         |
   Main   PluginManager

instead of:

    PluginManager
       A    |
       |    |
       |    V
      Main

Your choice

Which alternative you want to use is your choice. If the two classes work together as an indivisible part of your program, then use the easier approach of forward declaring. If you want to have these components decoupled and avoid dependencies as much as possible, then use the second approach.

A: As far as the error "'class QObject' has no member named 'getLogger'" is concerned, it has nothing to do with circular dependency. In

PluginManager( QObject *parent = 0 )

the type of parent is QObject *, which has no member getLogger(); getLogger() is a member of the Main class.

A: Does the logger part keep changing in the application? If not, then while creating the PluginManager you should initialize the logger as well by passing an additional param, removing the later dependency of the plugin manager on the server.
Sub Button7_Click()
    Dim srchRng As Range
    Worksheets("Summary").Activate
    ActiveWindow.DisplayFormulas = False
    Set srchRng = Range("C20:C300")
    Dim c As Range
    For Each c In srchRng
        If c.Formula = "=#REF!" Then
            ActiveWorkbook.Worksheets("Aug").Columns(2).SpecialCells(xlFormulas, xlErrors).EntireRow.Delete
            Exit For
        End If
    Next
End Sub

A: Here is the solution to my question:

Sub Button7_Click()
    Dim r As Long
    Application.ScreenUpdating = False
    Worksheets("Summary").Activate
    ' To loop through rows backwards
    For r = 300 To 20 Step -1
        ' To check the formula in column C
        If Cells(r, "C").Formula = "=#REF!" Then
            ' To delete the row and 5 rows under it
            Rows(r & ":" & r + 5).Delete
        End If
    Next r
    Application.ScreenUpdating = True
    MsgBox "Done!"
End Sub

You could also replace this line:

Rows(r & ":" & r + 5).Delete

with something like this:

Cells(r, "C").Resize(6, 1).EntireRow.Delete
Also, I am aware I shouldn't be echoing my PDO exceptions, but I have done this temporarily for debugging purposes. But nothing is echoed. There don't appear to be any errors.

try {
    $db = new PDO('mysql:host=x.x.x.x;dbname=xxx', "xxx", "xxx");
} catch (PDOException $ex) {
    echo $ex->getMessage();
}

if (isset($_POST['title'])) {
    try {
        $stmt = $db->prepare("SELECT * FROM xxxxx WHERE Title = :title;");
        $stmt->bindParam(':title', $_POST['title']);
        $stmt->execute();
        $rows = $stmt->fetchAll();
    } catch (PDOException $ex) {
        echo $ex->getMessage();
    }

    if (count($rows) > 0) {
        $result = $rows[0];
        if ($result['Author'] == $_SESSION['user_name']) {
            try {
                $stmt = $db->prepare("UPDATE xxxxx SET Title = :title, `Short Desc` = :short, Description = :desc, Location = :loc, Genre = :genre, Date = :date, lat = :lat, lng = :lng WHERE ID = :id and Author = :user LIMIT 1;");
                $stmt->bindParam(':title', $_POST['title']);
                $stmt->bindParam(':short', $_POST['shortdesc']);
                $stmt->bindParam(':desc', $_POST['description']);
                $stmt->bindParam(':loc', $_POST['location']);
                $stmt->bindParam(':genre', $_POST['genre']);
                $stmt->bindParam(':date', $_POST['date']);
                $stmt->bindParam(':lat', $_POST['lat']);
                $stmt->bindParam(':lng', $_POST['lng']);
                $stmt->bindParam(':user', $_SESSION['user_name']);
                $stmt->execute();
                $err = "Your ad was successfully updated.";
            } catch (PDOException $ex) {
                echo $ex->getMessage();
            }
        } else {
            $err = "An ad already exists with that title.";
        }
    } else {
        try {
            $stmt = $db->prepare("INSERT INTO xxxxx (`Title`, `Short Desc`, `Description`, `Location`, `Genre`, `Date`, `Author`, `lat`, `lng`) VALUES (:title,:short,:desc,:loc,:genre,:date,:user,:lat,:lng)");
            $stmt->bindParam(':title', $_POST['title']);
            $stmt->bindParam(':short', $_POST['shortdesc']);
            $stmt->bindParam(':desc', $_POST['description']);
            $stmt->bindParam(':loc', $_POST['location']);
            $stmt->bindParam(':genre', $_POST['genre']);
            $stmt->bindParam(':date', $_POST['date']);
            $stmt->bindParam(':lat', $_POST['lat']);
            $stmt->bindParam(':lng', $_POST['lng']);
            $stmt->bindParam(':user', $_SESSION['user_name']);
            $stmt->execute();
            $err = "Your ad was successfully added to our database.";
        } catch (PDOException $ex) {
            echo $ex->getMessage();
        }
    }
}
doc_4975
A: WinForms was never good at this and it's a bit of a pain. One way you can try is by embedding a TextBox in a Panel and then manage the drawing based on focus from there: public class BorderTextBox : Panel { private Color _NormalBorderColor = Color.Gray; private Color _FocusBorderColor = Color.Blue; public TextBox EditBox; public BorderTextBox() { this.DoubleBuffered = true; this.Padding = new Padding(2); EditBox = new TextBox(); EditBox.AutoSize = false; EditBox.BorderStyle = BorderStyle.None; EditBox.Dock = DockStyle.Fill; EditBox.Enter += new EventHandler(EditBox_Refresh); EditBox.Leave += new EventHandler(EditBox_Refresh); EditBox.Resize += new EventHandler(EditBox_Refresh); this.Controls.Add(EditBox); } private void EditBox_Refresh(object sender, EventArgs e) { this.Invalidate(); } protected override void OnPaint(PaintEventArgs e) { e.Graphics.Clear(SystemColors.Window); using (Pen borderPen = new Pen(this.EditBox.Focused ? _FocusBorderColor : _NormalBorderColor)) { e.Graphics.DrawRectangle(borderPen, new Rectangle(0, 0, this.ClientSize.Width - 1, this.ClientSize.Height - 1)); } base.OnPaint(e); } } A: You can handle WM_NCPAINT message of TextBox and draw a border on the non-client area of control if the control has focus. You can use any color to draw border: using System; using System.Drawing; using System.Runtime.InteropServices; using System.Windows.Forms; public class ExTextBox : TextBox { [DllImport("user32")] private static extern IntPtr GetWindowDC(IntPtr hwnd); private const int WM_NCPAINT = 0x85; protected override void WndProc(ref Message m) { base.WndProc(ref m); if (m.Msg == WM_NCPAINT && this.Focused) { var dc = GetWindowDC(Handle); using (Graphics g = Graphics.FromHdc(dc)) { g.DrawRectangle(Pens.Red, 0, 0, Width - 1, Height - 1); } } } } Result The painting of borders while the control is focused is completely flicker-free: BorderColor property for TextBox In the current post I just change the border color on focus. 
You can also add a BorderColor property to the control. Then you can change border-color based on your requirement at design-time or run-time. I've posted a more completed version of TextBox which has BorderColor property: in the following post: * *BorderColor property for TextBox A: Using OnPaint to draw a custom border on your controls is fine. But know how to use OnPaint to keep efficiency up, and render time to a minimum. Read this if you are experiencing a laggy GUI while using custom paint routines: What is the right way to use OnPaint in .Net applications? Because the accepted answer of PraVn may seem simple, but is actually inefficient. Using a custom control, like the ones posted in the answers above is way better. Maybe the performance is not an issue in your application, because it is small, but for larger applications with a lot of custom OnPaint routines it is a wrong approach to use the way PraVn showed. A: try this bool focus = false; private void Form1_Paint(object sender, PaintEventArgs e) { if (focus) { textBox1.BorderStyle = BorderStyle.None; Pen p = new Pen(Color.Red); Graphics g = e.Graphics; int variance = 3; g.DrawRectangle(p, new Rectangle(textBox1.Location.X - variance, textBox1.Location.Y - variance, textBox1.Width + variance, textBox1.Height +variance )); } else { textBox1.BorderStyle = BorderStyle.FixedSingle; } } private void textBox1_Enter(object sender, EventArgs e) { focus = true; this.Refresh(); } private void textBox1_Leave(object sender, EventArgs e) { focus = false; this.Refresh(); } A: This is an ultimate solution to set the border color of a TextBox: public class BorderedTextBox : UserControl { TextBox textBox; public BorderedTextBox() { textBox = new TextBox() { BorderStyle = BorderStyle.FixedSingle, Location = new Point(-1, -1), Anchor = AnchorStyles.Top | AnchorStyles.Bottom | AnchorStyles.Left | AnchorStyles.Right }; Control container = new ContainerControl() { Dock = DockStyle.Fill, Padding = new Padding(-1) }; 
container.Controls.Add(textBox); this.Controls.Add(container); DefaultBorderColor = SystemColors.ControlDark; FocusedBorderColor = Color.Red; BackColor = DefaultBorderColor; Padding = new Padding(1); Size = textBox.Size; } public Color DefaultBorderColor { get; set; } public Color FocusedBorderColor { get; set; } public override string Text { get { return textBox.Text; } set { textBox.Text = value; } } protected override void OnEnter(EventArgs e) { BackColor = FocusedBorderColor; base.OnEnter(e); } protected override void OnLeave(EventArgs e) { BackColor = DefaultBorderColor; base.OnLeave(e); } protected override void SetBoundsCore(int x, int y, int width, int height, BoundsSpecified specified) { base.SetBoundsCore(x, y, width, textBox.PreferredHeight, specified); } } A: set Text box Border style to None then write this code to container form "paint" event private void Form1_Paint(object sender, PaintEventArgs e) { System.Drawing.Rectangle rect = new Rectangle(TextBox1.Location.X, TextBox1.Location.Y, TextBox1.ClientSize.Width, TextBox1.ClientSize.Height); rect.Inflate(1, 1); // border thickness System.Windows.Forms.ControlPaint.DrawBorder(e.Graphics, rect, Color.DeepSkyBlue, ButtonBorderStyle.Solid); } A: With PictureBox1 .Visible = False .Width = TextBox1.Width + 4 .Height = TextBox1.Height + 4 .Left = TextBox1.Left - 2 .Top = TextBox1.Top - 2 .SendToBack() .Visible = True End With A: Here is my complete Flat TextBox control that supports themes including custom border colors in normal and focused states. The control uses the same concept mentioned by Reza Aghaei https://stackoverflow.com/a/38405319/5514131 ,however the FlatTextBox control is more customizable and flicker-free. The control handles the WM_NCPAINT window message in a better way to help eliminate flicker. 
Protected Overrides Sub WndProc(ByRef m As Message) If m.Msg = WindowMessage.WM_NCPAINT AndAlso _drawBorder AndAlso Not DesignMode Then 'Draw the control border Dim w As Integer Dim h As Integer Dim clip As Rectangle Dim hdc As IntPtr Dim clientRect As RECT = Nothing GetClientRect(Handle, clientRect) Dim windowRect As RECT = Nothing GetWindowRect(Handle, windowRect) w = windowRect.Right - windowRect.Left h = windowRect.Bottom - windowRect.Top clip = New Rectangle(CInt((w - clientRect.Right) / 2), CInt((h - clientRect.Bottom) / 2), clientRect.Right, clientRect.Bottom) hdc = GetWindowDC(Handle) Using g As Graphics = Graphics.FromHdc(hdc) g.SetClip(clip, CombineMode.Exclude) Using sb = New SolidBrush(BackColor) g.FillRectangle(sb, 0, 0, w, h) End Using Using p = New Pen(If(Focused, _borderActiveColor, _borderNormalColor), BORDER_WIDTH) g.DrawRectangle(p, 0, 0, w - 1, h - 1) End Using End Using ReleaseDC(Handle, hdc) Return End If MyBase.WndProc(m) End Sub I have removed the default BorderStyle property and replaced it with a simple boolean DrawBorder property that controls whether to draw a border around the control or not. Use the BorderNormalColor property to specify the border color when the TextBox has no focus, and the BorderActiveColor property to specify the border color when the control receives focus. The FlatTextBox comes with two themes VS2019 Dark and VS2019 Light, use the Theme property to switch between them. Complete FlatTextBox control code written in VB.NET https://gist.github.com/ahmedosama007/37fe2004183a51a4ea0b4a6dcb554176
doc_4976
(For people familiar with Pinescript, this column will replicate the result of this Pinescript function): df['st_trendup'] = np.select(df['Close'].shift() > df['st_trendup'].shift(),df[['st_up','st_trendup'.shift()]].max(axis=1),df['st_up']) The problem occurs in the true part of the np.select()because I cannot call .shift() on a string. * *Normally, I would make a new column that uses .shift() beforehand but since this is recursive, I have to do it all in one line. *If possible I'd like to avoid using loops for speed; prefer solutions using native pandas or numpy functions. What I am looking for A way to find max function that can accomodate a .shift() call Columns that are used: def tr(high,low,close1): return max(high - low, abs(high - close1), abs(low - close1)) df['st_closeprev'] = df['Close'].shift() df['st_hl2'] = (df['High']+df['Low'])/2 df['st_tr'] = df.apply(lambda row: tr(row['High'],row['Low'],row['st_closeprev']),axis=1) df['st_atr'] = df['st_tr'].ewm(alpha = 1/pd,adjust=False,min_periods=pd).mean() df['st_up'] = df['st_hl2'] - factor * df['st_atr'] df['st_dn'] = df['st_hl2'] + factor * df['st_atr'] df['st_trendup'] = np.select(df['Close'].shift() > df['st_trendup'].shift(),df[['st_up','st_trendup'.shift()]].max(axis=1),df['st_up']) Sample data obtained by the df.to_dict {'Date': {0: Timestamp('2021-01-01 09:15:00'), 1: Timestamp('2021-01-01 09:30:00'), 2: Timestamp('2021-01-01 09:45:00'), 3: Timestamp('2021-01-01 10:00:00'), 4: Timestamp('2021-01-01 10:15:00'), 5: Timestamp('2021-01-01 10:30:00'), 6: Timestamp('2021-01-01 10:45:00'), 7: Timestamp('2021-01-01 11:00:00'), 8: Timestamp('2021-01-01 11:15:00'), 9: Timestamp('2021-01-01 11:30:00'), 10: Timestamp('2021-01-01 11:45:00'), 11: Timestamp('2021-01-01 12:00:00'), 12: Timestamp('2021-01-01 12:15:00'), 13: Timestamp('2021-01-01 12:30:00'), 14: Timestamp('2021-01-01 12:45:00'), 15: Timestamp('2021-01-01 13:00:00'), 16: Timestamp('2021-01-01 13:15:00'), 17: Timestamp('2021-01-01 13:30:00'), 18: 
Timestamp('2021-01-01 13:45:00'), 19: Timestamp('2021-01-01 14:00:00'), 20: Timestamp('2021-01-01 14:15:00'), 21: Timestamp('2021-01-01 14:30:00'), 22: Timestamp('2021-01-01 14:45:00'), 23: Timestamp('2021-01-01 15:00:00'), 24: Timestamp('2021-01-01 15:15:00'), 25: Timestamp('2021-01-04 09:15:00')}, 'Open': {0: 31250.0, 1: 31376.0, 2: 31405.0, 3: 31389.4, 4: 31377.5, 5: 31347.8, 6: 31310.8, 7: 31343.4, 8: 31349.5, 9: 31349.9, 10: 31325.1, 11: 31310.9, 12: 31329.0, 13: 31376.0, 14: 31375.5, 15: 31357.4, 16: 31325.0, 17: 31341.1, 18: 31300.0, 19: 31324.5, 20: 31353.3, 21: 31350.0, 22: 31346.9, 23: 31330.0, 24: 31314.3, 25: 31450.2}, 'High': {0: 31407.0, 1: 31425.0, 2: 31411.95, 3: 31389.45, 4: 31382.0, 5: 31350.0, 6: 31354.6, 7: 31359.0, 8: 31370.0, 9: 31364.7, 10: 31350.0, 11: 31337.9, 12: 31378.9, 13: 31419.5, 14: 31377.75, 15: 31360.0, 16: 31367.15, 17: 31345.2, 18: 31340.0, 19: 31367.0, 20: 31375.0, 21: 31370.0, 22: 31350.0, 23: 31334.6, 24: 31329.6, 25: 31599.0}, 'Low': {0: 31250.0, 1: 31367.95, 2: 31352.5, 3: 31331.65, 4: 31301.4, 5: 31303.05, 6: 31310.0, 7: 31325.05, 8: 31335.35, 9: 31315.35, 10: 31281.9, 11: 31292.0, 12: 31316.25, 13: 31352.05, 14: 31335.0, 15: 31322.0, 16: 31318.25, 17: 31261.55, 18: 31283.3, 19: 31324.5, 20: 31322.0, 21: 31332.15, 22: 31324.1, 23: 31300.15, 24: 31280.0, 25: 31430.0}, 'Close': {0: 31375.0, 1: 31398.3, 2: 31386.0, 3: 31377.0, 4: 31342.3, 5: 31311.7, 6: 31345.0, 7: 31349.0, 8: 31344.2, 9: 31327.6, 10: 31311.3, 11: 31325.6, 12: 31373.0, 13: 31375.0, 14: 31357.4, 15: 31326.0, 16: 31345.9, 17: 31300.6, 18: 31324.4, 19: 31353.8, 20: 31345.6, 21: 31341.6, 22: 31332.5, 23: 31311.0, 24: 31285.0, 25: 31558.4}, 'Volume': {0: 259952, 1: 163775, 2: 105900, 3: 99725, 4: 115175, 5: 78625, 6: 67675, 7: 46575, 8: 53350, 9: 54175, 10: 96975, 11: 80925, 12: 79475, 13: 147775, 14: 38900, 15: 64925, 16: 52425, 17: 142175, 18: 81800, 19: 74950, 20: 68550, 21: 40350, 22: 47150, 23: 119200, 24: 222875, 25: 524625}} A: Change: 
df[['st_up','st_trendup'.shift()]].max(axis=1)

to:

df[['st_up','st_trendup']].assign(st_trendup = df['st_trendup'].shift()).max(axis=1)
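One caveat worth noting: assign/shift only works if the previous st_trendup values are already final, but the Pinescript definition is genuinely recursive (each value depends on the one just computed), so a plain sequential loop is the safe fallback. A pure-Python sketch of that recursion with made-up numbers (trend_up is a hypothetical helper, not part of the question's code):

```python
def trend_up(close, st_up):
    """trendup[i] = max(st_up[i], trendup[i-1]) if close[i-1] > trendup[i-1] else st_up[i]"""
    out = []
    for i, up in enumerate(st_up):
        if i > 0 and close[i - 1] > out[i - 1]:
            out.append(max(up, out[i - 1]))
        else:
            out.append(up)
    return out

close = [10, 12, 11, 13]
st_up = [9, 10, 9.5, 10.5]
print(trend_up(close, st_up))   # [9, 10, 10, 10.5]
```

Because the recursion conditions on its own previous output, it cannot easily be vectorised with numpy alone; wrapping a loop like this (e.g. over df.itertuples()) is the usual compromise.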
doc_4977
A: Assuming you have access to the QScrollBar you want to move, you can do it like this: use a single-shot QTimer to define how often you move the scroll. Connect the timeout() signal of the timer to a slot written by you which calls the QScrollBar setSliderPosition method. If you need to scroll further, restart the QTimer. The slot you connect the timer's signal to would look like:

void scroll()
{
    new_scroll_offset = ...; // compute scroll offset here (possibly from old scroll offset)
    scroll_bar->setSliderPosition(new_scroll_offset);
    if (/* can scroll further */) {
        timer->start();
    }
}

It can also be done with a multiple-shot timer in a similar fashion.
doc_4978
So i asked to myself: if "int" is a datatype, why i cannot push "ADTs" to my own stack. Then i came with this code: #include <iostream> class Person { std::string name; int age; public: Person(std::string pName = "", int pAge = 1) { name = pName; age = pAge; } void Print() { std::cout << name << " " << age << std::endl; } }; class Stack { Person * stack; int size, top; int index; public: Stack(int stackSize) { top = stackSize -1; index = top; stack = new Person[stackSize]; } void push(Person person) { if (index < 0) std::cout << "Stack UNDERFLOW" << "Index is: " << index << std::endl; stack[index--] = person; } Person & pop() { if (index > top) { std::cout << "Stack OVERFLOW" << std::endl; } return stack[++index]; } }; I know, there are stacks, queues, vectos, etc in the STL lib. I just wanted to do it by myself. I want the stack push a copy of the object. I'm not sure i don't know if the compiler is pushing addresses, copying the whole object (what is what i want) or what. Please enlight me. Here is my main() code: int main() { Stack stack(100); Person person("Lucas", 39); for (int i = 0; i < 100; i++) { stack.push(person); ((Person)stack.pop()).Print(); } return EXIT_SUCCESS; } A: To answer your question about copies, this: stack[index--] = person; makes a copy, because the type on both sides of the assignment is of type T. This: stack.push(person); also makes a copy, because you are passing person by value. To avoid this (redundant) copy, declare push as: void push(const T &person) A: Well, what i did to solve my question was what @PaulMcKenzie said in his comment. I created a template for the "Stack" class and any template "T" is the datatype that is passed to the this class. About the main methond (in my question) it had an unnecesary cast to (Person) since it's implied. 
Edit 2: @Paul Sanders were right too, i was doing a redundant copy at push() This way i solved my problem: #include <iostream> class Person { std::string name; int age; public: Person(std::string pName = "", int pAge = 1) { name = pName; age = pAge; } void Print() { std::cout << name << " " << age << std::endl; } }; template <class T> class Stack { T * stack; int size, top; int index; public: Stack(int stackSize) { top = stackSize -1; index = top; stack = new T[stackSize]; } void push(const T &person) { if (index < 0) std::cout << "Stack UNDERFLOW" << std::endl; stack[index--] = person; } T pop() { if (index > top) { std::cout << "Stack OVERFLOW" << std::endl; } return stack[++index]; } }; int main() { Stack<Person> stack(100); Person person1("Lucas", 39); Person person2("Gustavo", 38); for (int i = 0; i < 100; i++) { if (i % 2 == 0) stack.push(person1); else stack.push(person2); } for (int i = 0; i < 100; i++) stack.pop().Print(); return EXIT_SUCCESS; } In the example of the main() function, it creates an stack of Person objects of 100. Then i create 2 people: "Lucas" and "Gustavo" and push it intercalated to the my stack for 100 times (the first for statment). Then the second and final for statement pop() all values and print them.
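The copy-semantics question at the heart of the original post translates to other languages too: C++ pass-by-value plus element assignment copies automatically, while reference-based languages need an explicit copy. A small Python sketch of a bounded stack that deliberately stores copies (names and sizes are illustrative):

```python
import copy

class Stack:
    def __init__(self, size):
        self._items = []
        self._size = size

    def push(self, item):
        if len(self._items) >= self._size:
            raise OverflowError("stack overflow")
        self._items.append(copy.deepcopy(item))  # store a copy, not a reference

    def pop(self):
        if not self._items:
            raise IndexError("stack underflow")
        return self._items.pop()

person = {"name": "Lucas", "age": 39}
s = Stack(100)
s.push(person)
person["age"] = 40            # mutate the original after pushing
print(s.pop()["age"])         # 39, the stack held its own copy
```

Mutating person after the push does not affect what pop() returns, which is exactly the behaviour the C++ version gets for free from value semantics.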
doc_4979
List scores = Stream.concat(oldEntries.stream(), newEntries.stream())
        .sorted()
        .distinct()
        .limit(maxSize)
        .collect(Collectors.toList());

I am expecting a sorted list without any duplicates, but sometimes there is a duplicate in the list. I have overridden the hashCode and equals methods, and I have also observed that these methods return the correct value every time. Can anyone see what is wrong with my stream?

This is my equals() and hashCode(). They were auto-generated by IDEA:

..
private int userId;
private int levelId;
private int score;

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    Score score = (Score) o;
    if (userId != score.userId) return false;
    return levelId == score.levelId;
}

@Override
public int hashCode() {
    int result = userId;
    result = 31 * result + levelId;
    return result;
}

public int compareTo(Score other) {
    if (other == null) {
        return 1;
    } else {
        return Integer.compare(other.score, this.score);
    }
}
..

A: It is a bug. The documentation of Stream.distinct() simply says:

    Returns a stream consisting of the distinct elements (according to Object.equals(Object)) of this stream. For ordered streams, the selection of distinct elements is stable (for duplicated elements, the element appearing first in the encounter order is preserved.) For unordered streams, no stability guarantees are made.

There is no requirement that for ordered streams the equal objects should come right after each other (consecutively). However the implementation seems to assume they do. What the documentation means is that the first occurrence of user 2, level 3 should be preserved and the second occurrence discarded. According to the Java bug database the bug exists up to Java 13 and remains unresolved.

Links

* Documentation of Stream.distinct()
* JDK-8223933 Stream.distinct() sometimes allows duplicates when the stream is sorted in the JDK Bug System.
A: Your stream is first ordered according to compareTo, i.e. using score. It's then "distinctified" using equals(), i.e. using userId and levelId. According to the javadoc:

    For ordered streams, the selection of distinct elements is stable (for duplicated elements, the element appearing first in the encounter order is preserved.) For unordered streams, no stability guarantees are made.

Example:

score 1, user 2, level 3
score 3, user 2, level 3
score 1, user 3, level 1

After sorting...

score 1, user 2, level 3
score 1, user 3, level 1
score 3, user 2, level 3

Distinct now does nothing, because the elements are not equal according to user/level. This can result in "duplicate" elements, because you're ordering the stream based on one thing, but determining equality by an entirely different thing.
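The failure mode is easy to reproduce in miniature: a distinct pass that only compares neighbours (which is effectively what the buggy ordered-stream implementation does) misses duplicates the score-sort has separated. A Python sketch using the same (score, user, level) data as the example above:

```python
from itertools import groupby

def consecutive_distinct(items, key):
    """Drop only *adjacent* duplicates, analogous to distinct() on a stream
    the pipeline believes is sorted by the same key."""
    return [next(group) for _, group in groupby(items, key=key)]

scores = [(1, 2, 3), (3, 2, 3), (1, 3, 1)]          # (score, user, level)
by_score = sorted(scores, key=lambda s: s[0])       # sort by score only
identity = lambda s: (s[1], s[2])                   # equality = (user, level)

print(consecutive_distinct(by_score, identity))
# [(1, 2, 3), (1, 3, 1), (3, 2, 3)]  user 2 / level 3 survives twice

seen, true_distinct = set(), []
for s in by_score:                                   # hash-based dedup keeps one
    if identity(s) not in seen:
        seen.add(identity(s))
        true_distinct.append(s)
print(true_distinct)                                 # [(1, 2, 3), (1, 3, 1)]
```

One workaround in the original Java is to call distinct() before sorted(), so equal elements are removed by hashing before the sort separates them.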
doc_4980
How can I select a list of n red points for each green marker according to the above explanation. Thanks in advance. This is my code, and I mentioned the coordinate of red points and green markers int it. %% Network Setup anchor_num=1; % Number of anchor node node_num=20; % Total nodes length1=70; % Area length anchor_x=0; % Intial position of anchor x coordinate anchor_y=0; % Intial position of anchor y coordinate anchormove=[];% Anchor trajectory width=40; % Area width r = 30; A = zeros(0,2); B = zeros(0,2); C = zeros(0,2); D = zeros(0,2); north = [ 0 6.9]; east = [ 6.9 0]; south = [ 0 -6.9]; west = [-6.9 0]; order = 4; for n = 1:order AA = [B ; north ; A ; east ; A ; south ; C]; BB = [A ; east ; B ; north ; B ; west ; D]; CC = [D ; west ; C ; south ; C ; east ; A]; DD = [C ; south ; D ; west ; D ; north ; B]; A = AA; B = BB; C = CC; D = DD; end % Plot network trajectory %Mtrix A contains the coordinate of red markers. A = [0 0; cumsum(A)] p=plot(A(:,1),A(:,2)) title('Plot of Hilbert trajectory'); set(p,'Color','magenta ','LineWidth',2); axis([0 100 0 100]); hold on % x and y are the coordinates of green markers x=rand(1,100)*100; y=rand(1,100)*100; scatter(x,y) anchormove(1,:)=A(:,1)' anchormove(2,:)=A(:,2)' idx=length(anchormove(1,:)); for i=1:idx-1 % Plot the moving anchor node Ax=anchormove(1,i); Ay=anchormove(2,i); plot(Ax,Ay,'r*'); % Plot transmission range of the anchor node axis([0 100 0 100]) % hold on pause(0.1) %hold off end A: If you don't have the statistics and machine learning toolbox you can do so by hand. To find all "red" points (from your code it seems they are contained in A) that are within range R from a specific green point (x(i),y(i)), you can use w = sqrt(sum((A - [x(i),y(i)]).^2,2)) <= R; if you have Matlab >=R2016, otherwise w = sqrt(sum((A - repmat([x(i),y(i)],size(A,1),1)).^2,2)) <= R; Then, w is a logical array containing logical 1 for all anchor points within range R of [x(i),y(i)]. 
You can use logical indexing àla A(w,:) to retrieve them. For instance, plot(A(w,1),A(w,2),'ks') will plot them with a different marker. If you need to do this for all your green points jointly, the code becomes W = sqrt(sum(abs((reshape(A,size(A,1),1,2) - reshape([x;y]',1,length(x),2)).^2),3)) <= R; on Matlab>=R2016. Now, W is a matrix where its rows are the red points and the columns are the green markers, containing a logical 1 if a pair is within radius R and 0 otherwise. You can for instance use any(W,2) to check whether the red points are within reach of any of the green markers. For Matlab before R2016 you need to modify the above with some repmat magic: W = sqrt(sum(abs((repmat(reshape(A,size(A,1),1,2),1,length(x),1) - repmat(reshape([x;y]',1,length(x),2),size(A,1),1,1)).^2),3)) <= R; A: You can use rangesearch(X,Y,radius). It returns a cell array where the cells contain the indexes of the points X within a given radius for each point Y. Since the number of nearby points can vary for each Y, the number of indexes per cell may vary. So in your case: % turn the two x and y vectors into [x y] column format. GreenPoints = [x;y].'; % get indexes of the points A for each Green point within 5 distance idx = rangesearch(A,GreenPoints,5); Or shorter: idx = rangesearch(A,[x;y].',5);
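Outside MATLAB the same neighbour query is a short list comprehension per green marker; a pure-Python sketch with made-up coordinates (for large point sets a spatial index such as scipy.spatial.cKDTree.query_ball_point plays the role of rangesearch):

```python
from math import hypot

def points_in_range(points, center, radius):
    """Return all points within Euclidean distance `radius` of `center`."""
    cx, cy = center
    return [p for p in points if hypot(p[0] - cx, p[1] - cy) <= radius]

red = [(0, 0), (3, 4), (10, 0), (6, 8)]   # anchor trajectory points
green = [(0, 0), (9, 1)]                  # markers
radius = 5

for g in green:
    print(g, "->", points_in_range(red, g, radius))
# (0, 0) -> [(0, 0), (3, 4)]
# (9, 1) -> [(10, 0)]
```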
doc_4981
function SubmitCheckBoxes() {
    alert("test");
    var selectedIDs = [];
    var x = 0;
    a = document.getElementsByTagName("input");
    for (i = 0; i < a.length; i++) {
        if (a[i].type == "checkbox") {
            if (a[i].checked) {
                alert(a[i].value);
                selectedIDs[x] = a[i].value;
                x++;
            }
        }
    }
    $.post('./Courses/SaveAndRedirect', selectedIDs, function (data) {
    });
}

However when I look at the form data being submitted all it says is undefined: undefined for each element in the array. Not sure what the problem is here.

A: It is the data attribute in the jquery post method that is wrong. It can't take an array as a parameter. Here is the documentation:

    data: map or string that is sent to the server with the request.

Try using an object literal instead:

$.post('./Courses/SaveAndRedirect', {selectedIDs: selectedIDs}, function (data) {
});

I would also try writing selectedIDs[x] = a[i].value; differently:

selectedIDs.push(a[i].value);

A: I think the problem may be that your post variables are associated with just a numeric instance rather than a field name

A: You can try something like this:

var selectedIDs = [];
$('input[type="checkbox"]:checked').each(function(i, e){
    selectedIDs.push($(e).val());
});
$.post('./Courses/SaveAndRedirect', selectedIDs, function (data) {
});
doc_4982
However, I have the following error:

pyodbc.ProgrammingError: ('42000', '[42000] [Sybase][ODBC Driver]Syntax error or access violation (0) (SQLPrepare)')

Could you help to solve it? My code -

import pandas as pd
import pyodbc as db
from datetime import datetime

# set constants
DSN = 'DSN'
input_table = 'table'
run_timestamp = str(datetime.now())[:19]
test_date_start = '2020-09-09'
test_date_end = '2025-08-08'

input_data = pd.DataFrame({
    'model': ['aaa', 'aaa'],
    'result_type': ['a', 'test_statistic'],
    'test_name': ['b', 'mwb'],
    'input_variable_name': ['c', 'pd'],
    'segment': ['car', 'book'],
    'customer_type': ['le', 'le'],
    'value': [60, 0.58],
    'del_flag': [0, 0]
})

query = 'insert into schema.table (data_input_time,test_date_start,test_date_end,model,result_type,test_name,input_variable_name,segment,customer_type,value,del_flag) values (?,?,?,?,?,?,?,?,?,?,?)'

cnxn = db.connect(DSN)
cursor = cnxn.cursor()
cursor.execute('SETUSER MYUSERNAME')

for row_count in range(0, input_data.shape[0]):
    # 1 method
    chunk = input_data.iloc[row_count:row_count + 1, :].values.tolist()
    tuple_of_tuples = tuple(tuple(x) for x in chunk)
    cursor.executemany(query, tuple_of_tuples)
    # 2 method
    params = [(i,) for i in chunk]  # f'txt{i}'
    cursor.executemany(query, params)

A: You're using an 'f' string, but forgot the brackets. It should look like this:

query = f'''insert into schema.table
    ({data_input_time},{test_date_start},{test_date_end},{model},{result_type},{test_name},{input_variable_name},{segment},{customer_type},{value},{del_flag})
    values (?,?,?,?,?,?,?,?,?,?,?)'''
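Whatever the Sybase-side syntax issue turns out to be, the parameter shapes in both attempts are worth checking: executemany wants a sequence of rows, each row a tuple with exactly as many values as the statement has ? placeholders, so params = [(i,) for i in chunk] produces one-element rows (each holding a whole list) instead of eleven values. A runnable sketch with the stdlib sqlite3 driver standing in for the ODBC connection (table and columns are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (model TEXT, test_name TEXT, value REAL)")

rows = [
    ("aaa", "b", 60.0),
    ("aaa", "mwb", 0.58),
]

# Correct shape: one tuple per row, len(row) == number of ? placeholders.
conn.executemany("INSERT INTO results (model, test_name, value) VALUES (?, ?, ?)", rows)
count = conn.execute("SELECT COUNT(*) FROM results").fetchone()[0]
print(count)  # 2

# Wrong shape (analogous to params = [(i,) for i in chunk]): one value per row.
try:
    conn.executemany("INSERT INTO results VALUES (?, ?, ?)", [(list(rows[0]),)])
except sqlite3.Error as exc:
    print("driver rejects it:", type(exc).__name__)
```

Note also that calling executemany once per loop iteration, as in the first method, defeats its purpose; building the full list of rows and calling it once is the usual pattern.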
doc_4983
If anyone can find out, what adds the white colour between the lines in the drop-down menu, i would very much appreciate it! Because no style on the page, gives that white border, and i just want to get rid of it! http://www.ny-webdesign.dk/bio7 This is the link, so if anyone can find the source to the colour, I would very much appreciate it! <a> <li> <ul> Those are the tags I think contains the border, but I am not sure! This is the link, so if anyone can find the source to the colour, I would very much appreciate it! Thank you! A: It's in #header #navigation ul.nav > li.parent:hover a:before. If I set it to content: none;, there appears no border. A: If you add the below it disappears. ul.sub-menu{ border: none !important; }
doc_4984
Are there any extensions to Hpricot, or perhaps a flag I need to set, that will allow HTML5 documents to be parsed correctly? A: I know it kind of works around the direct question but I would suggest you try Nokogiri http://nokogiri.org/ as mentioned in some of the comments on your question post. I've had no issues with it parsing any HTML/XML like structured text, including HTML5. A: I think Hpricot's to_original_html method is exactly what you're looking for. From the docs, to_original_html Attempts to preserve the original HTML of the document, only outputing new tags for elements which have changed.
doc_4985
Example message from my env. communication between the FIX initiator and FIX_STUB(acceptor) : 16:49:58.475 [http-8080-4] INFO quickfixj.msg.outgoing - FIXT.1.1:FIX->FIX_STUB: 8=FIXT.1.1|9=267|35=RQS|34=3|49=FIX|52=20160628-13:49:58.474|56=FIX_STUB|20000=1|20001={json string}|20002=1.0|10=171| <20160628-13:49:58, FIXT.1.1:FIX_STUB->FIX, incoming> (8=FIXT.1.1|9=267|35=RQS|34=3|49=FIX|52=20160628-13:49:58.474|56=FIX_STUB|20000=1|20001={json string}|20002=1.0|10=171|) 16:49:58.476 [QFJ Message Processor] INFO c.r.fix.api.stub.FixApplication - FIX STUB MESSAGE TYPE:quickfix.fix50sp2.Request <20160628-13:49:58, FIXT.1.1:FIX_STUB->FIX, outgoing> (8=FIXT.1.1|9=308|35=RSP|34=3|49=FIX_STUB|52=20160628-13:49:58.527|56=FIX|20000=1|20001={json string}||20002=1.0|10=240|) 16:49:58.528 [NioProcessor-2] INFO quickfixj.msg.incoming - FIXT.1.1:FIX->FIX_STUB: 8=FIXT.1.1|9=308|35=RSP|34=3|49=FIX_STUB|52=20160628-13:49:58.527|56=FIX|20000=1|20001={json string}||20002=1.0|10=240| 16:49:58.529 [QFJ Message Processor] INFO c....fix.engine.FixEngineImpl - FIX MESSAGE TYPE:quickfix.fix50sp2.Response on Tomcat is working but when we try to use the exactly same code in a test environment and deploy to a Websphere server I get this error: 2016-06-28 11:17:44,196 appl="rtv" env="SYS" version="3.8.12" loglevel="INFO " message="FIXT.1.1:FIX->FIX_STUB: 8=FIXT.1.19=26735=RQS34=249=FIX52=20160628-09:17:44.19656=FIX_STUB20000=120001={json string}20002=1.010=147" thread="WebContainer : 1" logger="quickfixj.msg.outgoing" 2016-06-28 11:17:44,198 appl="rtv" env="SYS" version="3.8.12" loglevel="INFO " message="FIX STUB MESSAGE TYPE:quickfix.fix50sp2.Message" thread="QFJ Message Processor" logger="c.r.fix.api.stub.FixApplication" 2016-06-28 11:17:44,202 appl="rtv" env="SYS" version="3.8.12" loglevel="ERROR" message="FIX STUB MESSAGE CRACK FAILED" thread="QFJ Message Processor" logger="c.r.fix.api.stub.FixApplication" quickfix.UnsupportedMessageType: null at 
quickfix.fix50sp2.MessageCracker.onMessage(MessageCracker.java:39) ~[quickfixj-messages-all-1.6.2.jar:1.6.2]
at quickfix.fix50sp2.MessageCracker.crack50(MessageCracker.java:1787) ~[quickfixj-messages-all-1.6.2.jar:1.6.2]
at quickfix.fix50sp2.MessageCracker.crack(MessageCracker.java:1555) ~[quickfixj-messages-all-1.6.2.jar:1.6.2]
at com....fix.api.stub.FixApplication.fromApp(FixApplication.java:97) ~[rita-web-3.8.jar:na]

As you can see in the example, the message type (message subclass) is handled correctly (quickfix.fix50sp2.Request, quickfix.fix50sp2.Response) but on WebSphere it is NOT (quickfix.fix50sp2.Message). MessageCracker is used. Can you help me to fix this issue, to find out why this is happening? Is it related to the OS, the web server or some FIX configuration?

A: I fixed the issue... it was related to the classloader and duplicate classes. The jar files I deployed also contained the default message classes (pulled in as Maven dependencies). On Tomcat the application picked up the right classes, while WebSphere took the default ones; this is why the application's MessageCracker didn't handle the messages properly.
doc_4986
All machine keys, decryption keys, cookie domains, etc, are all the same. No code was modified. We simply applied the 4.5.1 update to those two servers. I've noticed for the 4.0 server, while our authentication cookie IS being passed up, it's not being recognized by the code (I threw a test page on the site and outputted some variables). Has machine key encryption changed between 4.0 and 4.5.1? Do I need to apply some patch? Thanks. A: The encryption APIs have changed in 4.5 see this blog post for more detail - http://blogs.msdn.com/b/webdev/archive/2012/10/23/cryptographic-improvements-in-asp-net-4-5-pt-2.aspx According to MS simply upgrading to 4.5 wouldn't affect the apps unless the web.configs have changed to target 4.5 which seems odd in your case. You can try adding this attribute to each web.config to force .NET 2.0 compatibility- <machineKey compatibilityMode="Framework20SP1" />
doc_4987
OriginalClass+Extension.h: @interface OriginalClass (Extension) { NSArray *_array; } @property (nonatomic, retain) NSArray *array; @end But I got the error: Cannot declare variable inside @interface or @protocol. Please help to solve the problem. A: As the other stated, you can't. Although has H2CO3 pointed out, you can use associative references. On Apple Documents: Note that a category can’t declare additional instance variables for the class; it includes only methods. However, all instance variables within the scope of the class are also within the scope of the category. That includes all instance variables declared by the class, even ones declared @private. If you want to go for associated object, you can use this answer. Moreover, you can use this post by Ole Begemann. A: You can't, a category can't declare additional instance variables ... Reference : here . A: Simple: you can't add instance variables to a class using a category. If you need to store additional data: use associated objects.
doc_4988
QUERY

select nr
from table1
inner join table2 on table2.nr = table1.nr
where table1.nr in (select nr from table2 where columnn like '%value%')
  and nr in (select nr from table2 where columnn like '%other value%')

When I only use the first subquery I get results, but with the second subquery in it I don't.

A: Use OR instead of AND

select nr
from table1
inner join table2 on table2.nr = table1.nr
where table1.nr in (select nr from table2 where columnn like '%value%')
   or nr in (select nr from table2 where columnn like '%other value%')

And the join is useless if it is the exact same query that you use. An elegant way is

select nr
from table1
inner join table2 on table2.nr = table1.nr
where CONTAINS(table2.column, '"*value*" OR "*other value*"')
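Whether AND or OR is the right fix depends on intent: with AND the outer query keeps nr values whose rows match both patterns (possibly in different rows), with OR it keeps values matching either. A small sqlite sketch of the difference, using made-up data and the question's column name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table2 (nr INTEGER, columnn TEXT);
    INSERT INTO table2 VALUES
        (1, 'value'), (1, 'other value'),  -- nr 1 has rows matching both patterns
        (2, 'value'),                      -- nr 2 matches only '%value%'
        (3, 'neither');
""")

both = """SELECT DISTINCT nr FROM table2
          WHERE nr IN (SELECT nr FROM table2 WHERE columnn LIKE '%value%')
            AND nr IN (SELECT nr FROM table2 WHERE columnn LIKE '%other value%')"""
either = """SELECT DISTINCT nr FROM table2
            WHERE nr IN (SELECT nr FROM table2 WHERE columnn LIKE '%value%')
               OR nr IN (SELECT nr FROM table2 WHERE columnn LIKE '%other value%')"""

print(sorted(r[0] for r in conn.execute(both)))    # [1]
print(sorted(r[0] for r in conn.execute(either)))  # [1, 2]
```

If no single nr ever has rows matching both patterns, the AND version correctly returns nothing, which may be exactly why only the single-subquery version produced rows.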
doc_4989
org.springframework.web.servlet.ModelAndView and org.springframework.web.portlet.ModelAndView both the ModelAndViews have almost same methods. A significant difference to notice was when I added object to org.springframework.web.portlet.ModelAndView, the object was not able to reach to the view. In the view the added object was null. Do you people know any other significant difference??!! Please add your information here :) In general the question can also be put as to spot the difference in org.springframework.web.servlet.*; org.springframework.web.portlet.*; A: Well, they are exactly similar, except that *.servlet.* classes are tailored for classical web applications based on servlets, while *.portlet.* one are specially tailored for JSR-168 portlets. This is a deliberate choice from Spring : As much as possible, the Portlet MVC framework is a mirror image of the Web MVC framework, and also uses the same underlying view abstractions and integration technology. But a portlet in much different from a servlet. You could find references on the JSR-168 and a nice presentation on What Is a Portlet - O'Reilly Media. Here are some extracts from the latter : Portlets are web components--like servlets--specifically designed to be aggregated in the context of a composite page. Usually, many portlets are invoked to in the single request of a portal page. Each portlet produces a fragment of markup that is combined with the markup of other portlets, all within the portal page markup. [Windows for different applications are] developed independently of each other. The developer of the news portlet will create an application and pack it into a .war file. Then the administrator of the portal server will install this .war file on the server and create a page. In the next stage, every user will choose which applications he wants on his page. For that reason, Spring portlet classes are very different from portlet ones, even when they present same interface. 
The main way in which portlet workflow differs from servlet workflow is that the request to the portlet can have two distinct phases: the action phase and the render phase. The action phase is executed only once and is where any backend changes or actions occur, such as making changes in a database. The render phase then produces what is displayed to the user each time the display is refreshed. TL/DR : So *portlet* classes have been specially designed to present a similar interface to the developer (same as *servlet*) but are quite different under the hood and must not be used in classical (servlet) SpringMVC applications.
doc_4990
I have this table created on my Cassandra 2.0. I'm trying to read and write with it using Astyanax, but unfortunately I'm really new to Cassandra and Java. create table requestformatlist ( usedbyid text, // rowkey rfid text , // rfid keystoadd Set<text>, keystoremove Set<text>, primary key (usedbyid, rfid) ); table contents gid | rfid | keystoadd | keystoremove ------+------+----------------------------------+---------------------------- 1111 | 1111 | {'AddMe1', 'AddMe2'} | {'RemoveMe1', 'RemoveMe2'} 0003 | 0003 | {'address', 'name', 'state'} | {'z1', 'z2', 'z3'} I have read these, but unfortunately I'm confused why we need two writes if we want to add just one row: http://brianoneill.blogspot.com/2012/09/composite-keys-connecting-dots-between.html http://brianoneill.blogspot.com/2012/10/cql-astyanax-and-compoundcomposite-keys.html I have the Astyanax DAO and I can connect with it, but I cannot read and write to it. I have the following code: public void testRead() throws Exception { RequestFormatDAO dao = new RequestFormatDAO("localhost:9160", "l2"); log(dao.read("0003")); } public ColumnList<RequestFormatListEntry> read(String rowKey) throws ConnectionException { OperationResult<ColumnList<RequestFormatListEntry>> result = this.getKeyspace().prepareQuery(COLUMN_FAMILY).getKey(rowKey) .execute(); ColumnList<RequestFormatListEntry> requestFormat = result.getResult(); LOG.debug("Read list [" + rowKey + "]"); return requestFormat; } public class RequestFormatListEntry { @Component(ordinal = 0) public String rfid; @Component(ordinal = 1) public String field1; public RequestFormatListEntry(){} } Results that I have on Read: Read list [0003] listEntry.rfid=>[0003] listEntry.keystoremove=>[null] listEntry.rfid=>[0003] listEntry.keystoremove=>[keystoadd] listEntry.rfid=>[0003] listEntry.keystoremove=>[keystoadd] listEntry.rfid=>[0003] listEntry.keystoremove=>[keystoadd] listEntry.rfid=>[0003] listEntry.keystoremove=>[keystoremove] listEntry.rfid=>[0003]
listEntry.keystoremove=>[keystoremove] listEntry.rfid=>[0003] listEntry.keystoremove=>[keystoremove] A: Your problem is that you're trying to access your CQL ("high-level" API) table using Thrift ("low-level" API). Your shiny CQL table (and you even use CQL3 collections) looks like the sample below, if you're using Thrift. [default@test] list requestformatlist; Using default limit of 100 Using default cell limit of 100 ------------------- RowKey: 1111 => (name=1111:, value=, timestamp=1396662079753000) => (name=1111:keystoadd:4164646d6531, value=, timestamp=1396662079753000) => (name=1111:keystoadd:4164646d6532, value=, timestamp=1396662079753000) => (name=1111:keystoremove:52656d6f766531, value=, timestamp=1396662079753000) => (name=1111:keystoremove:52656d6f766532, value=, timestamp=1396662079753000) ------------------- RowKey: 0003 => (name=0003:, value=, timestamp=1396662495503000) => (name=0003:keystoadd:61646472657373, value=, timestamp=1396662495503000) => (name=0003:keystoadd:6e616d65, value=, timestamp=1396662495503000) => (name=0003:keystoadd:7374617465, value=, timestamp=1396662495503000) => (name=0003:keystoremove:7a31, value=, timestamp=1396662495503000) => (name=0003:keystoremove:7a32, value=, timestamp=1396662495503000) => (name=0003:keystoremove:7a33, value=, timestamp=1396662495503000) 2 Rows Returned. Elapsed time: 26 msec(s). [default@test] And that explains the results you get from your DAO. I'm not an Astyanax expert, but probably you should take a look at https://github.com/Netflix/astyanax/wiki/Cql-and-cql3 and don't mess up CQL and Thrift. A couple of links for further reading: http://www.datastax.com/dev/blog/thrift-to-cql3 http://thelastpickle.com/blog/2013/01/11/primary-keys-in-cql.html
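The hex fragments in those Thrift composite column names are just the byte-encoded CQL3 set elements, which can be verified by decoding a couple of the values shown in the listing above (a quick sketch; only the hex strings are taken from the output):

```python
# The composite column names in the Thrift listing embed the CQL3 set
# elements as raw UTF-8 bytes; decoding the hex reveals the original values.
def decode(hex_name: str) -> str:
    return bytes.fromhex(hex_name).decode("utf-8")

print(decode("61646472657373"))  # 'address' -- a keystoadd element of row 0003
print(decode("7a31"))            # 'z1'      -- a keystoremove element of row 0003
```

That is exactly why the low-level Thrift view looks nothing like the CQL table: each set element becomes its own internal cell, named by the clustering key plus the encoded element.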
doc_4991
mydomain.com/index.php?eIDSR=sr_freecap_EidDispatcher&id=9781&vendorName=SJBR&extensionName=SrFreecap&pluginName=ImageGenerator&controllerName=ImageGenerator&actionName=show&formatName=png&L=0&set=571e0 When I call this URL manually I get the whole page and not the image. Is eIDSR correct? I was of the opinion that the correct call should be eID= ... I can't find information about it. Any help appreciated! A: I missed the existing bug report: https://forge.typo3.org/issues/89735 I tried the above solution and it works: In the extension in which you implement sr_freecap, put this file: /your-extension/Configuration/RequestMiddlewares.php with the following content: <?php return [ 'frontend' => [ 'srfreecap-eidhandler' => [ 'target' => \SJBR\SrFreecap\Middleware\EidHandler::class, 'before' => [ 'typo3/cms-frontend/content-length-headers', ], ] ] ]; This will work. Seems like a necessary feature which is not mentioned in the manual.
doc_4992
This is the gist of it $(document).ready(function() { $.post("myscript", { Action: "JQueryReq", }, function(data){ alert(data); }); }); If I do the above I get back everything I want and it looks like this (in the JS dialog box) [{"val1":null,"val2":null,"val3":null,"Size":"Inches","valu4":null}] But if I change alert(data); to alert(data.Size); I just get "undefined" I also tried var myjsonreturn = eval(data); alert(myjsonreturn.Size); I also tried var myjsonreturn = eval('('+data+')'); alert(myjsonreturn.Size); And every time I get undefined. What am I doing wrong? TIA A: What is data? Is it a string? If so, you want to use: eval('('+data+')')[0].Size; A: Tried this? alert(data[0].Size) A: What you're getting back as a JSON response is an array with just one cell. Because the array's length is 1, its only index is 0, so you can access the contents like this: alert(data[0].Size); Or, if you want to loop through the values with jQuery's .each(): $.each(data[0], function(index, value){ alert(index + ':' + value); });
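The indexing issue is easy to reproduce outside the browser: the response is a one-element array, so the object sits one level down, at index 0. A Python sketch standing in for the jQuery callback (the JSON literal below is a trimmed version of the response shown above):

```python
import json

# The server response is an array containing a single object, so the
# fields live one level down, at index 0 of the parsed list.
data = json.loads('[{"val1": null, "val2": null, "Size": "Inches"}]')

# data["Size"] would fail: the top-level value is a list, not an object.
size = data[0]["Size"]
print(size)  # Inches
```

The same applies to the JavaScript version: `data.Size` is undefined on an array, while `data[0].Size` reaches the object inside it.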
doc_4993
swiftmailer: transport: gmail host: smtp.gmail.com username: 'ringleaderr@gmail.com' password: '****' Below is my controller: <?php namespace PsiutekBundle\Controller; use Symfony\Bundle\FrameworkBundle\Controller\Controller; use Symfony\Component\HttpFoundation\Request; class BasketController extends Controller { public function koszykAction() { return $this->render('PsiutekBundle:Basket:koszyk.html.twig'); } public function SendMailAction() { $Request=$this->get('request_stack')->getCurrentRequest(); if($Request->getMethod()=="POST"){ $subject=$Request->get("Subject"); print_r($subject); exit; $email=$Request->get("email"); $body=$Request->get("message"); print_r($body); $transport=\Swift_SmtpTransport::newInstance('smtp.gmail.com',465,'ssl') ->setUsername('ringleaderr@gmail.com') ->setPassword('******'); $mailer=\Swift_Mailer::newInstance($transport); $message = \Swift_Message::newInstance('Web Lead') ->setSubject($subject) ->setTo($email) ->setBody($body); $result=$mailer->send($message); } return $this->render('PsiutekBundle:Basket:koszyk.html.twig'); } } A: To send email within Symfony you should use the mailer service. Since your controller extends Controller you can get it directly from the container. So, your action should look like: public function sendMail() { $mailer = $this->get('mailer'); //getting mailer from container $message = \Swift_Message::newInstance('Web Lead') ->setSubject($subject) ->setTo($email) ->setBody($body); $result=$mailer->send($message); } There is a special howto in the cookbook covering this topic. NOTE: consider moving the email-sending logic (in fact, any logic) outside of your controller class. NOTE 2: I assume you are aware of the exit in your method, which will terminate method execution.
doc_4994
var client = new cql.Client({hosts: ['*.*.*.*'], keyspace: '*', username:'*', password: '*'}); console.log('connected to ' , client); console.log('Querying....'); client.execute('select * from example where field1=?', [1], function(err, result) { console.log('inside', result); if (err) console.log('execute failed',err); else console.log('got chat ' + result.rows[0].field1); client.shutdown(); } ); I am using this code, but the execute() callbacks aren't getting called. To test, I used an incorrect IP address; it immediately responds and the console.log('execute failed', err) line logs what is below. execute failed { [PoolConnectionError] name: 'PoolConnectionError', info: 'Represents a error while trying to connect the pool, all the connections failed.', individualErrors: [ { Error: getaddrinfo ENOTFOUND at errnoException (dns.js:28:10) at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:79:26) code: 'ENOTFOUND', errno: 'ENOTFOUND', syscall: 'getaddrinfo', hostname: '*.*.*.*' host: '*.*.*.*' port: 9042 } ] } With the right IP address nothing happens. Maybe it is because new cql.Client internally calls connect asynchronously, so execute is attempted before the connection is even made? Everything works perfectly in cqlsh; my servers are in AWS US west coast. Any inputs welcome. A: You are using the legacy Cassandra driver node-cassandra-cql, as the project readme states, it is no longer maintained: node-cassandra-cql has graduated from community driver to being the foundation of the official Datastax Node.js Driver for Apache Cassandra. There will be no more development in this repository. I encourage everyone to start migrating to the new driver as soon as you can, it's got some great new features that you should try out, along with an improved cql to javascript type mapping for prepared statements. Use DataStax Node.js driver instead: npm install cassandra-driver --save
doc_4995
using (AmazonS3Client s3client = new AmazonS3Client( ConfigurationManager.AppSettings["s3accesskey"], ConfigurationManager.AppSettings["s3secret"])) { PutObjectRequest putObjectRequest = new PutObjectRequest { BucketName = rootBucket, Key = key, InputStream = content }; s3client.PutObject(putObjectRequest); } It is throwing the error below: Cannot close stream until all bytes are written. Please advise. A: This happens because the stream is at the end. Just set the position of your stream back to 0 and it'll work. Hope that helps! A: It seems you can also get this same exception intermittently when S3 times out while uploading. You can fix that by increasing the timeout, e.g.: new AmazonS3Client( ConfigurationManager.AppSettings["s3accesskey"], ConfigurationManager.AppSettings["s3secret"], new AmazonS3Config { Timeout = TimeSpan.FromMinutes(30), ReadWriteTimeout = TimeSpan.FromMinutes(30) } )
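The stream-position explanation is easy to see with any seekable stream: after a full read, the position sits at the end and nothing is left for the uploader to consume until the stream is rewound. A Python sketch of the same idea (io.BytesIO standing in for the C# InputStream):

```python
import io

# Simulate a stream whose contents were already read once before the upload.
content = io.BytesIO(b"file payload")
content.read()                    # something consumed the stream earlier
assert content.read() == b""      # nothing left: the position is at the end

content.seek(0)                   # rewind before handing it to the uploader
assert content.read() == b"file payload"  # the full payload is available again
```

In the C# code the equivalent fix is `content.Position = 0;` (or `content.Seek(0, SeekOrigin.Begin)`) before assigning the stream to the request.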
doc_4996
public class Order { public virtual EntityId Id { get; } public virtual OrderNumber OrderNumber { get; private set; } public virtual DateTime Created { get; } public virtual string Note { get; private set; } = null; public virtual IEnumerable<ProductSale> Products => _products; private IList<ProductSale> _products = new List<ProductSale>(); protected Order() { } public Order(EntityId id, OrderNumber orderNumber, DateTime created, string note = null, IEnumerable<ProductSale> products = null) { Id = id; ChangeOrderNumber(orderNumber); Created = created; if (products != null) { AddProducts(products); } Note = note; } public virtual void ChangeOrderNumber(string orderNumber) { OrderNumber = orderNumber; } public virtual void ChangeNote(string note) { Note = note; } public virtual void AddProducts(IEnumerable<ProductSale> products) { // some logic and operations } public virtual void AddProduct(ProductSale product) { // some logic and operations } public virtual void RemoveProduct(ProductSale product) { // some logic and operations } } public sealed class ProductSale { public virtual EntityId Id { get; } public virtual Order Order { get; private set; } = null; public virtual ProductSaleState ProductSaleState { get; private set; } = ProductSaleState.New; // enum protected ProductSale() { } public ProductSale(EntityId id, ProductSaleState productSaleState, Order order) { Id = id; Order = order; ProductSaleState = productSaleState; } public virtual void AddOrder(Order order) { // logic } public virtual void RemoveOrder() { // logic } } My value objects: public class EntityId : IEquatable<EntityId> { public virtual Guid Value { get; protected set; } protected EntityId() { } public EntityId(Guid value) { // validation Value = value; } public static EntityId Create() => new(Guid.NewGuid()); public override bool Equals(object obj) { return Equals(obj as EntityId); } public bool Equals(EntityId other) { if (ReferenceEquals(null, other)) return false; if (ReferenceEquals(this, 
other)) return true; return Value == other.Value; } public override int GetHashCode() { return GetEqualityComponents() .Select(x => x != null ? x.GetHashCode() : 0) .Aggregate((x, y) => x ^ y); } private IEnumerable<object> GetEqualityComponents() { yield return Value; } } public sealed class OrderNumber : IEquatable<OrderNumber> { public virtual string Value { get; protected set; } public OrderNumber(string productName) { ValidProductName(productName); Value = productName; } public override bool Equals(object obj) { return Equals(obj as OrderNumber); } public bool Equals(OrderNumber other) { if (ReferenceEquals(null, other)) return false; if (ReferenceEquals(this, other)) return true; return Value == other.Value; } public override int GetHashCode() { return GetEqualityComponents() .Select(x => x != null ? x.GetHashCode() : 0) .Aggregate((x, y) => x ^ y); } private IEnumerable<object> GetEqualityComponents() { yield return Value; } private static void ValidProductName(string productName) { // validation } } My mappings: public sealed class OrderConfiguration : ClassMapping<Order> { public OrderConfiguration () { Table("Orders"); ComponentAsId(a => a.Id, map => { map.Access(Accessor.ReadOnly); map.Property(id => id.Value, prop => { prop.Access(Accessor.ReadOnly); prop.Column(nameof(Order.Id)); }); }); Component(a => a.OrderNumber, map => { map.Property(ad => ad.Value, prop => { prop.Access(Accessor.ReadOnly); prop.Column(nameof(Order.OrderNumber)); }); }); Property(a => a.Created, prop=> { prop.Access(Accessor.ReadOnly); prop.Column(nameof(Order.Created)); }); Property(a => a.Note, map => map.Column(nameof(Order.Note))); Bag(o => o.ProductSales, map => { map.Table("ProductSales"); map.Key(k => k.Column(col => col.Name("OrderId"))); }, map => map.OneToMany()); } } public class ProductSaleConfiguration : ClassMapping<ProductSale> { public ProductSaleConfiguration() { Table("ProductSales"); ComponentAsId(a => a.Id, map => { map.Access(Accessor.ReadOnly); 
map.Property(id => id.Value, prop => { prop.Access(Accessor.ReadOnly); prop.Column(nameof(ProductSale.Id)); }); }); Property(p => p.ProductSaleState, map => { map.Column(nameof(ProductSale.ProductSaleState)); map.Type<EnumStringType<ProductSaleState>>(); }); ManyToOne(p => p.Order, map => { map.Column("OrderId"); }); } } When I inserted an object, for example an order, EntityId was always set to null. I haven't used NHibernate as an ORM so far, so I couldn't check if the mappings were correct. Maybe there is something missing. I used .NET 6 and a SQLite database. A: I solved this problem by implementing my own custom IUserType. It is really strange that NHibernate has some problems with mapping a wrapped Guid as an identifier. If someone is interested, here is the implementation: public sealed class EntityIdConfigurationType : IUserType { public SqlType[] SqlTypes => new[] { SqlTypeFactory.Guid }; public Type ReturnedType => typeof(EntityId); public bool IsMutable => false; public object Assemble(object cached, object owner) => DeepCopy(cached); public object DeepCopy(object value) => value; public object Disassemble(object value) => DeepCopy(value); public new bool Equals(object x, object y) { if (ReferenceEquals(x, y)) { return true; } if (x == null || y == null) { return false; } return x.Equals(y); } public int GetHashCode(object x) => x.GetHashCode(); public object NullSafeGet(DbDataReader rs, string[] names, ISessionImplementor session, object owner) { var obj = NHibernateUtil.Guid.NullSafeGet(rs, names[0], session); if (obj is null) return null; var id = (Guid)obj; if (id == Guid.Empty) return null; return new EntityId(id); } public void NullSafeSet(DbCommand cmd, object value, int index, ISessionImplementor session) { if (value is null) { object nullValue = DBNull.Value; NHibernateUtil.Guid.NullSafeSet(cmd, nullValue, index, session); return; } var type = value.GetType(); if (type == typeof(Guid)) { NHibernateUtil.Guid.NullSafeSet(cmd, value, index, session); return; } EntityId entityId
= value as EntityId; object valueToSet; if (entityId != null) { valueToSet = entityId.Value; } else { valueToSet = DBNull.Value; } NHibernateUtil.Guid.NullSafeSet(cmd, valueToSet, index, session); } public object Replace(object original, object target, object owner) => original; } and the mappings: public sealed class OrderConfiguration : ClassMapping<Order> { public OrderConfiguration() { Table("Orders"); Id(p => p.Id, map => { map.Column(nameof(Order.Id)); map.Type<EntityIdConfigurationType>(); }); Component(a => a.OrderNumber, map => { map.Property(ad => ad.Value, prop => { prop.Access(Accessor.ReadOnly); prop.Column(nameof(Order.OrderNumber)); }); }); Property(a => a.Created, prop => { prop.Access(Accessor.ReadOnly); prop.Column(nameof(Order.Created)); }); Property(a => a.Note, map => map.Column(nameof(Order.Note))); Bag(o => o.ProductSales, map => { map.Table("ProductSales"); map.Key(k => k.Column(col => col.Name("OrderId"))); }, map => map.OneToMany()); } } public class ProductSaleConfiguration : ClassMapping<ProductSale> { public ProductSaleConfiguration() { Table("ProductSales"); Id(p => p.Id, map => { map.Column(nameof(ProductSale.Id)); map.Type<EntityIdConfigurationType>(); }); Property(p => p.ProductSaleState, map => { map.Column(nameof(ProductSale.ProductSaleState)); map.Type<EnumStringType<ProductSaleState>>(); }); ManyToOne(p => p.Order, map => { map.Column("OrderId"); }); } }
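The core of the custom type is the pair of null-safe conversions between the raw database value and the wrapper. The same mapping logic can be sketched language-agnostically; everything below is an illustrative Python stand-in (EntityId, null_safe_get, null_safe_set are made-up names mirroring the NullSafeGet/NullSafeSet branches, not NHibernate API):

```python
import uuid

class EntityId:
    """Minimal stand-in for the EntityId value object."""
    def __init__(self, value: uuid.UUID):
        self.value = value
    def __eq__(self, other):
        return isinstance(other, EntityId) and self.value == other.value

def null_safe_get(raw):
    """DB value -> domain value, mirroring NullSafeGet: NULL or empty GUID maps to None."""
    if raw is None or raw == uuid.UUID(int=0):
        return None
    return EntityId(raw)

def null_safe_set(value):
    """Domain value -> DB value, mirroring NullSafeSet's three branches."""
    if value is None:
        return None            # stored as NULL (DBNull.Value in the C# code)
    if isinstance(value, uuid.UUID):
        return value           # already a raw GUID, pass through
    return value.value         # unwrap the EntityId to its inner GUID

u = uuid.uuid4()
assert null_safe_get(null_safe_set(EntityId(u))) == EntityId(u)  # round-trip
assert null_safe_get(None) is None
```

The round-trip assertion is the whole point of the IUserType: the persisted column only ever sees the raw GUID, while the domain model only ever sees the wrapper.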
doc_4997
I can read the dictionary items in the array just fine, e.g. [[[stories objectAtIndex: b] objectForKey: @"title"] But, now I am trying to update (i.e. replace) a couple of objects e.g. "title" & "matchingword", but I cannot find the right code. Any suggestions are much appreciated. I tried this, but it seems to be adding entirely new objects to the array NSMutableDictionary *itemAtIndex = [[NSMutableDictionary alloc]init]; [itemAtIndex setObject:[[placesArray objectAtIndex:a]objectAtIndex:0] forKey:@"reference"]; [stories replaceObjectAtIndex:x withObject:itemAtIndex]; // replace "reference" with user's unique key [itemAtIndex release]; I also tried this (but it didn't work either): //NSMutableDictionary *itemAtIndex2 = [[NSMutableDictionary alloc]init]; //[itemAtIndex2 setObject:[separatePlaces objectAtIndex:x] forKey:@"matchingword"]; //[stories insertObject:itemAtIndex2 atIndex:x]; // add the unique matching word to story //[itemAtIndex2 release]; Help appreciated. Thanks. A: You need to grab the dictionary you want to modify. NSMutableDictionary *temp = [stories objectAtIndex: b]; Then change the value: [temp setObject:@"new Info" forKey:@"title"];
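The accepted approach, grabbing the mutable dictionary out of the array and modifying it in place rather than inserting a fresh one, looks the same in any language. A Python sketch of the Objective-C answer (the sample data is invented for illustration):

```python
# An array of mutable dictionaries, like the parsed "stories" array.
stories = [
    {"title": "Old title", "matchingword": "old"},
    {"title": "Another", "matchingword": "x"},
]

b = 0
temp = stories[b]           # grab the existing dictionary (a reference, not a copy)
temp["title"] = "new Info"  # update it in place, like setObject:forKey:

# The array still holds the same object, now with the new value,
# and no extra entries were appended.
assert stories[0]["title"] == "new Info"
assert len(stories) == 2
```

This is why the original attempts misbehaved: allocating a new dictionary and inserting it adds (or replaces with) an object containing only the new key, instead of updating the existing one.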
doc_4998
Here's a button for example: import React from 'react'; class Button extends React.Component { render() { return <button className={this.props.bStyle}>{this.props.title}</button>; } } export default Button; I have lots of other elements to put into react components so there's going to be a large list of them. The problem is that when you have lots of them and you need to import them all, the list can really grow too big. Just imagine 50 of these: import Element from './Element.component'; // x50 ? My question is... Is there a better approach to importing large lists of components in React? A: You can import all of your elements into one file and export each individually. Then you are able to import them all as elements and use them as elements.someComponent. // common.js because you'll commonly use them import Element from './Element.component'; import Element2 from './Element.component2'; // x50 ? /* ... */ export { Element, Element2 /* ... */ }; // in someOtherFile.js import * as Elements from './common'; /* now you are able to use these common elements as <Elements.Element /> <Elements.Element2 /> ... */
doc_4999
PMD: Amazon ENA We have a DPDK application that only calls rte_eth_rx_burst() (we do not transmit packets) and it must process the payload very quickly. The payload of a single network packet MUST be in contiguous memory. The DPDK API is optimized around memory pools of fixed-size mbufs. If a packet received on the DPDK port is larger than the mbuf size, but smaller than the max MTU, then it will be segmented according to the figure below: This leads us to the following problems: * *If we configure the memory pool to store large packets (for example max MTU size) then we will always store the payload in contiguous memory, but we will waste huge amounts of memory in case we receive traffic containing small packets. Imagine that our mbuf size is 9216 bytes, but we are receiving mostly packets of size 100-300 bytes. We are wasting memory by a factor of 90! *If we reduce the size of mbufs, to, let's say, 512 bytes, then we need special handling of those segments in order to store the payload in contiguous memory. Special handling and copying hurts our performance, so it should be limited. My final question: * *What strategy is recommended for a DPDK application that needs to process the payload of network packets in contiguous memory? With both small (100-300 bytes) and large (9216) packets, without wasting huge amounts of memory with 9K-sized mbuf pools? Is copying segmented jumbo frames into a larger mbuf the only option? A: There are a couple of ways involving the use of HW and SW logic to make use of multiple-size mempools. via hardware: * *If the NIC PMD supports packet or metadata (RX descriptor) parsing, one can use RTE_FLOW RAW to program the flow direction to a specific queue, where each queue can be set up with its desired rte_mempool.
*If the NIC PMD does not support parsing of metadata (RX descriptors), but the user is aware of specific protocol fields like ETH + MPLS|VLAN, ETH + IP + UDP, or ETH + IP + UDP + Tunnel (Geneve|VxLAN), one can use RTE_FLOW to distribute the traffic over specific queues (which have a larger mempool object size), thus making default traffic fall on queue-0 (which has a smaller mempool object size). *If the hardware flow-bifurcation option is available, one can set up RTE_FLOW with raw or tunnel headers to be redirected to a VF. Thus the PF can make use of a smaller-object mempool and the VF can make use of a larger-size mempool. via software: (if HW support is absent or limited) * *Using an RX callback (rte_rx_callback_fn), one can check mbuf->nb_segs > 1 to confirm multiple segments are present, then alloc an mbuf from the larger mempool, attach it as the first segment, and invoke rte_pktmbuf_linearize to move the content into the first buffer. *Pre-set all queues with large-size mempool objects; using an RX callback, check mbuf->pkt_len < [threshold size]; if yes, alloc an mbuf from the smaller pool, memcpy the content (pkt data and necessary metadata), then swap the original mbuf with the new mbuf and free the original mbuf. Pros and Cons: * *SW-1: this is a costly process, as multiple-segment memory access is non-contiguous and will be done for larger payloads such as 2K to 9K. The NIC hardware also has to support RX scatter/multi-segment. *SW-2: this is less expensive than SW-1. As there are no multiple segments, the cost can be amortized with mtod and prefetch of the payload. Note: in both cases, the cost of mbuf_free within the RX callback can be reduced by maintaining a list of original mbufs to free. Alternative option-1 (involves modifying the PMD): * *modify the PMD probe or create code to allocate mempools for large and small objects.
*set MAX elements per RX burst to 1 element *use the scalar code path only *change the recv function to: * *check the packet size from the RX descriptor *comment out the original replenish-per-threshold code *check the packet size by reading the packet descriptor *alloc either a large- or small-size mempool object. [edit-1] Based on the comment update: the DPDK version is 22.03 and the PMD is Amazon ENA. Based on the DPDK NIC summary and the ENA PMD, it points to: * *No RTE_FLOW RSS to specific queues. *No RTE_FLOW_RAW for packet size. *In the ena_rx_queue_setup function, it supports an individual rte_mempool per queue. Hence the current options are: * *Modify the ENA PMD to reflect support for multiple mempool sizes. *Use SW-2 with an rx_callback to copy smaller payloads to a new mbuf and swap out. Note: There is an alternate approach: * *create an empty pool with an external mempool *use a modified ENA PMD to get pool objects as single small buffers or multiple contiguous pool objects. Recommendation: Use a PMD or programmable NIC which can bifurcate based on packet size and then RTE_FLOW to a specific queue. To allow multiple CPUs to process multiple flows, set up Q-0 as the default for small packets, and the other queues with RTE_FLOW_RSS and a specific mempool.
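The memory trade-off that motivates the whole question can be quantified in a few lines. This is a rough back-of-the-envelope model, not DPDK code; the mbuf sizes are the ones from the question, and the packet mix is invented for illustration:

```python
# Rough model of mempool memory use for a stream of mostly-small packets.
MBUF_LARGE = 9216   # one pool sized for max MTU
MBUF_SMALL = 512    # alternative small-object pool

packets = [100, 200, 300, 9000, 150, 250]  # mostly small, one jumbo

# Single large pool: every packet occupies a full 9216-byte mbuf object.
single_pool = len(packets) * MBUF_LARGE

# Two pools (e.g. selected per packet by RTE_FLOW steering or an RX callback):
# small packets land in 512-byte objects, jumbos in 9216-byte ones.
dual_pool = sum(MBUF_LARGE if p > MBUF_SMALL else MBUF_SMALL for p in packets)

# The "factor of 90" from the question: a 100-byte packet in a 9216-byte mbuf.
waste_factor = MBUF_LARGE / 100

print(single_pool, dual_pool, waste_factor)  # 55296 11776 92.16
```

Even this toy mix shows why the dual-pool strategies above are attractive: memory use drops by roughly 5x while jumbo frames still get a contiguous buffer.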