26 October 2012 09:19 [Source: ICIS news] SINGAPORE (ICIS)--Under the agreement, BSP will supply around 20m barrels (approximately 2.75m tonnes) of crude a year to Hengyi's refinery project in Brunei. Hengyi Petrochemical said in April this year that an 8m tonne/year refinery will be built in the first phase of the project. BSP is a 50:50 joint venture between Shell and the government of Brunei.
http://www.icis.com/Articles/2012/10/26/9607598/bruneis-shell-to-supply-crude-to-hengyi-industrys-brunei-project.html
CC-MAIN-2014-52
en
refinedweb
Hey, I'm required to look through an array of boolean values and return true if they are all set to true, and false if at least one is set to false. Essentially, I need to do this:

Code Java:
boolean[] test = {true,true,false,true};
return (test[0] && test[1] && test[2] && test[3]);

but I need to do it in a (for?) loop, because the size of the array is not known beforehand. The array is already initialized elsewhere in the program; I only included it for the example. I only need to return the value of the collective array (if any are false, return false). I have tried a couple of ways to accomplish it but none of them are working. Perhaps this is a problem someone is familiar with? Thanks!

Edit: SOLVED IT. All I had to do was make a separate method that only handled the looping operation, and I was able to get the desired result.
http://www.javaprogrammingforums.com/%20collections-generics/22420-need-way-find-way-dermine-if-all-elements-boolean-array-true-false-printingthethread.html
CC-MAIN-2014-52
en
refinedweb
Servlet Program: Sir, I need to fetch the data from the database and store the result in a user-created HTML page. How should I write this program?

Query: How do I retrieve different data from two databases in a single JSP page? Thanks.

Thanks for this example, but please send me more and more examples to learn Java Struts + Hibernate + database connectivity.

Can I have code for retrieving data from a database using Struts and Hibernate?

Sir, please describe it in brief, because I do not understand where the action will be performed. Please send the HTML code and XML code. Thank you.

Servlets: How can I run a Java servlet thread safety program using the Tomcat server? Please give me a step-by-step procedure to run the following program. My program is A DEMO PROGRAM FOR THREAD SAFETY. package serv; import...

Servlets: Hi, I am doing one servlet program in which I am stuck... the student details in a servlet and stores that into one resultset object 3) forward...; Please visit the following link: Servlets

Servlets: When I am compiling the following servlet program it compiles successfully, but when I try to run the program it gives the following... the solution for this problem. And how can we deploy the servlet in Tomcat?

Servlets: When I deployed the following servlet program in Tomcat I... javax.servlet.ServletException: Error instantiating servlet class InsertServlet...) java.lang.Thread.run(Unknown Source). Here is the servlet code: import java.io.

Servlets Program: Hi, I have written the following servlet: [code] package com.nitish.servlets; import javax.servlet.*; import java.io.*; import... When I executed the program, it gave me an error as follows.

Servlets execution - JSP-Servlet: Hello friend, thanks for the reply. The link which you've provided contains the hello world program which has got HTML embedded in it. I want to know how to execute a servlet in which the HTML is written.
http://www.roseindia.net/tutorialhelp/allcomments/3262
CC-MAIN-2014-52
en
refinedweb
By Ian TeGrootenhuis On April 21, 2021

Page Builder by Kentico Xperience is a very powerful tool for developers and content administrators. Giving this much power to content admins can be a little worrisome when you're used to more structured content, but giving your content admins the freedom to customize their pages without calling you for redesigns will keep them happy. I'm going to highlight some of our custom work below: our generic column section and our carousel and configurator widgets.

Instead of having multiple sections cluttering the section selection menu to handle all the client requirements, we made a generic column section. The section properties have a simple dropdown for the content admin to select the desired number of columns. In the section's view, we set the Bootstrap column class to twelve divided by the colCount to obtain even Widget Zones. In the case of the admin selecting five columns, Bootstrap rounds the "2.4" down to 2. The result is dynamic and provides a lot of customization to the content admin without flooding them with too many section options.

We wanted to provide the client with a highly configurable carousel widget, so each carousel fits their needs. We're using SwiperJS for its vast amount of customizations and ease of use. If you look at those Swiper demos, it would've been easy for us to go overboard with the widget properties. Instead, we contained it to what the client specifically wanted and some nice-to-haves.

Using the PathSelector widget property type allowed us to mix structured content and the flexibility of Page Builder. Before creating the carousel widget, the content admin would need to create the slides by populating a folder with our Slide page type, then add in the carousel widget and edit the properties to select the folder they just created. In the CarouselWidgetController class (below), we're using the selected path to get all the children in that carousel folder and its children (slides).
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using CMS.DocumentEngine;
using Kentico.PageBuilder.Web.Mvc;
using LuciferLighting.Constants;
using LuciferLighting.Controllers.Widgets;
using LuciferLighting.Models.Carousel;
using LuciferLighting.Models.Generated;
using LuciferLighting.Models.Widgets.Carousels;

[assembly: RegisterWidget( carouselWidget.IdentifierController, typeof( CarouselWidgetController ), carouselWidget.NameController, IconClass = carouselWidget.IconClassController )]

namespace LuciferLighting.Controllers.Widgets
{
    public class CarouselWidgetController : WidgetController<CarouselWidgetProperties>
    {
        public ActionResult Index()
        {
            var properties = GetProperties();
            CarouselFolder carousel = new CarouselFolder();
            List<Slide> slides = new List<Slide>();

            string selectedPagePath = properties.PagePaths?.FirstOrDefault()?.NodeAliasPath;

            TreeNode page = DocumentHelper.GetDocuments()
                .Path( selectedPagePath )
                .OnCurrentSite()
                .TopN( 1 )
                .FirstOrDefault();

            var childrenNodes = DocumentHelper.GetDocuments<CarouselSlide>()
                .WhereEquals( "NodeParentID", page.NodeID )
                .Columns( "ImageSelector", "VideoLink", "Caption", "CaptionLocation" )
                .OrderBy( "NodeOrder" )
                .Published()
                .ToList();

            foreach( var node in childrenNodes )
            {
                Slide slide = new Slide
                {
                    Caption = node.Caption,
                    CaptionLocation = node.CaptionLocation,
                    ImageSelector = node.ImageSelector,
                    VideoLink = node.VideoLink
                };
                slides.Add( slide );
            }

            carousel.Header = properties.Header;
            carousel.SubHeader = properties.SubHeader;
            carousel.Slides = slides;
            carousel.IsUsingArrows = properties.IsUsingArrows;
            carousel.IsWhiteArrows = properties.IsWhiteArrows;
            carousel.IsUsingButtons = properties.IsUsingButtons;
            carousel.IsThreeSlideCarousel = properties.IsThreeSlideCarousel;
            carousel.Id = properties.Id;

            return PartialView( carouselWidget.ViewNameController, carousel );
        }
    }
}

Building clean, reusable widgets and sections will speed up your development for future projects and give your content administrators the confidence to create the pages they need.
https://www.bizstream.com/blog/april-2021/xperience-page-builder-complex-sections-widgets
CC-MAIN-2021-21
en
refinedweb
Version 1.5.1

OEChem 1.5.1

New features
- Added a new OEAssignFormalCharges function that operates on a single atom, instead of just the version for the entire molecule.
- Renamed the OESystem::ParamVis namespace to OEParamVisibility to make it consistent with other OpenEye namespaces.

Major bug fixes
- Fixed two bugs in kekulization of large molecules. First, some large molecules would fail kekulization when they were actually ok. Second, even when they were kekulized correctly, the method would still return false.
- This tweaks the MDL mol file reader to use the test dimension != 3 instead of "dimension == 2" when deciding to honor the wedge/hash bonds or to determine the chirality from 3D coordinates. The subtle difference is that at the point this code is called, the dimension is not necessarily "2" or "3" if the (optional) header line is missing. If the header has been omitted, we treat the molfile like a 2D file (which it most probably is).
- Changed the SMARTS parser to allow a TAB character (\t) to be treated as a separator following a SMARTS pattern. This reflects similar functionality in the SMILES parser and simplifies the task of writing "patty"-like applications.
https://docs.eyesopen.com/toolkits/python/oechemtk/releasenotes/version1_5_1.html
CC-MAIN-2021-21
en
refinedweb
Using transition components without external libraries: vue2-transitions

Vue 2 Transitions for Vue.js allows you to create transitions in various ways, utilizing this configurable collection. Each transition component is ~2kb (non-minified js+css, or ~400 bytes gzipped) and you can import only the ones you really need. Many alternative solutions import the whole animate.css library. Vue2-transitions is minimalistic and lets you import only the transitions that you need in your app.

List of available transitions
- FadeTransition
- ZoomCenterTransition
- ZoomXTransition
- ZoomYTransition
- ZoomUpTransition
- CollapseTransition
- ScaleTransition
- SlideXLeftTransition
- SlideXRightTransition
- SlideYUpTransition
- SlideYDownTransition

Props

props: {
  /**
   * Transition duration. Number for specifying the same duration for enter/leave transitions.
   * Object style {enter: 300, leave: 300} for specifying explicit durations for enter/leave.
   */
  duration: {
    type: [Number, Object],
    default: 300
  },
  /**
   * Whether the component should be a `transition-group` component.
   */
  group: Boolean,
  /**
   * Transform origin property.
   * Can be specified with styles as well, but it's shorter with this prop.
   */
  origin: {
    type: String,
    default: ''
  },
  /**
   * Element styles that are applied during transition. These styles are applied on @beforeEnter and @beforeLeave hooks.
   */
  styles: {
    type: Object,
    default: () => {
      return {
        animationFillMode: 'both',
        animationTimingFunction: 'ease-out'
      }
    }
  }
}

Group transitions

Each transition can be used as a transition-group by adding the group prop to one of the desired transitions.

<fade-transition group>
  <!-- keyed children here -->
</fade-transition>

Gotchas/things to watch:
- Elements inside group transitions should have display: inline-block or must be placed in a flex context (see the Vue.js docs reference).
- Each transition has a move class (see the move class docs). Unfortunately, the duration of the move transition cannot be configured through props. By default each transition has a move class associated with a .3s transition duration:
  - Zoom: .zoom-move{ transition: transform .3s ease-out; }
  - Slide: .slide-move{ transition: transform .3s ease-out; }
  - Scale: .scale-move{ transition: transform .3s cubic-bezier(.25,.8,.50,1); }
  - Fade: .fade-move{ transition: transform .3s ease-out; }

If you want to configure the duration, just redefine the class for the transition you use with the desired duration.

Example

To start working with vue2-transitions, use one of the following commands to install it:

npm i vue2-transitions
yarn add vue2-transitions

In a Webpack setup:

import Vue from 'vue'
import Transitions from 'vue2-transitions'

Vue.use(Transitions)

Usage: Use the component anywhere you would like in the template:

<template>
  <scale-transition>
    <div class="box" v-if="show">
      <p>Your transition</p>
    </div>
  </scale-transition>
</template>

<script>
export default {
  name: 'app',
  data() {
    return {
      show: true
    }
  },
  methods: {
    toggle() {
      this.show = !this.show
    }
  }
}
</script>

The above markup is an example of a ScaleTransition. That's it! If you would like to get started with Vue Transitions, head to the project's repository on GitHub, where you will also find the source code.
https://vuejsfeed.com/blog/using-transition-components-without-external-libraries
CC-MAIN-2021-21
en
refinedweb
Problem: I want to learn Machine Learning but I am unable to resolve the below error.

My Specs:
· Mac High Sierra 10.13.2
· Python 3.4.5
· Numpy 1.13.3

Used below command:

$ python3 -c "import jupyter, matplotlib, numpy, pandas, scipy, sklearn"

I am facing the below error:

RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
Traceback (most recent call last):
File "/Users/uekyo/ml/env/lib/python3.4/site-packages/pandas/__init__.py", line 36, in <module>
from pandas._libs import (hashtable as _hashtable,
File "/Users/uekyo/ml/env/lib/python3.4/site-packages/pandas/_libs/__init__.py", line 8, in <module>
from .tslib import iNaT, NaT, Timestamp, Timedelta, OutOfBoundsDatetime
File "pandas/_libs/tslib.pyx", line 2, in init pandas._libs.tslib
ImportError: numpy.core.multiarray failed to import

During handling of the above exception, another exception occurred:

File "<string>", line 2, in <module>
File "/Users/uekyo/ml/env/lib/python3.4/site-packages/pandas/__init__.py", line 40, in <module>
"the C extensions first.".format(module))
ImportError: C extension: numpy.core.multiarray failed to import not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace --force' to build the C extensions first.

The reason behind your error is most likely that the version of numpy is too low; the command below solved my problem:

pip3 install "numpy == 1.15.0" --user

conda install couldn't solve it because it currently only has numpy version 1.13.1, but that may be because the mirror site I chose is not the latest.
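As a quick sanity check before and after upgrading, you can print the version and install location actually being imported; a minimal snippet (nothing here is specific to the setup above):

python3 -c "import numpy; print(numpy.__version__, numpy.__file__)"
python3 -c "import pandas; print(pandas.__version__)"

If numpy.__file__ points somewhere other than your virtualenv's site-packages, the interpreter is picking up an older numpy installation than the one you just upgraded.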
https://kodlogs.com/34330/runtimeerror-module-compiled-against-api-version-0xc-but-this-version-of-numpy-is-0xb
CC-MAIN-2021-21
en
refinedweb
This is part of a series of Leetcode solution explanations (index). If you liked this solution or found it useful, please like this post and/or upvote my solution post on Leetcode's forums.

Leetcode Problem #823 (Medium): Binary Trees With Factors

Description: (Jump to: Solution Idea || Code: JavaScript | Python | Java | C++)

Given an array of unique integers, arr, where each integer arr[i] is strictly greater than 1, we make a binary tree using these integers, and each number may be used any number of times. Each non-leaf node's value should be equal to the product of the values of its children. Return the number of binary trees we can make; the answer may be large, so return it modulo 10^9 + 7.

Examples:

Constraints:
- 1 <= arr.length <= 1000
- 2 <= arr[i] <= 10^9

Idea: (Jump to: Problem Description || Code: JavaScript | Python | Java | C++)

The trick to this problem is realizing that we can break it down into smaller pieces. A number can always be a leaf, so the number of ways it can form a branch should always start at 1. If the number can be made from multiple factor pairs, then ways is our starting value of 1 plus the sum of all the ways to make those factor pairs. For each existing factor pair (fA & fB), the number of ways to make that particular pair configuration is the product of the number of ways to make fA and fB.

So we can see that each number relies on first solving the same question for each of its factors. This means that we should start by sorting our numbers array (A). Then we can iterate through A and figure out each number in ascending order, so that we will have completed any factors for larger numbers before we need to use them. This means storing the information, which we can do in a map, so that we can look up the results by value.

In order to be more efficient when we attempt to find each factor pair, we only need to iterate through A up to the square root of the number in question, so that we don't duplicate the same factor pairs going the opposite direction. That means we need to double every pair result where fA and fB are not the same.

Since each number can be the head of a tree, our answer (ans) will be the sum of each number's result. We shouldn't forget to modulo at each round of summation.

Implementation:

Java and C++, having typed variables, should use long for ways and ans, but will need to cast ans back to int before returning. They will also need an extra continue conditional when checking for factors.

Javascript Code:

var numFactoredBinaryTrees = function(A) {
    A.sort((a, b) => a - b)
    let len = A.length, fmap = new Map(), ans = 0
    for (let i = 0; i < len; i++) {
        let num = A[i], ways = 1, lim = Math.sqrt(num)
        for (let j = 0, fA = A[0]; fA <= lim; fA = A[++j]) {
            let fB = num / fA
            if (fmap.has(fB)) ways += fmap.get(fA) * fmap.get(fB) * (fA === fB ? 1 : 2)
        }
        fmap.set(num, ways), ans += ways
    }
    return ans % 1000000007
};

Python Code:

class Solution:
    def numFactoredBinaryTrees(self, A: List[int]) -> int:
        A.sort()
        fmap, ans = defaultdict(), 0
        for num in A:
            ways, lim = 1, sqrt(num)
            for fA in A:
                if fA > lim: break
                fB = num / fA
                if fB in fmap:
                    ways += fmap[fA] * fmap[fB] * (1 if fA == fB else 2)
            fmap[num], ans = ways, (ans + ways)
        return ans % 1000000007

Java Code:

class Solution {
    public int numFactoredBinaryTrees(int[] A) {
        Arrays.sort(A);
        int len = A.length;
        long ans = 0;
        HashMap<Integer, Long> fmap = new HashMap<>();
        for (int num : A) {
            long ways = 1;
            double lim = Math.sqrt(num);
            for (int j = 0, fA = A[0]; fA <= lim; fA = A[++j]) {
                if (num % fA != 0) continue;
                int fB = num / fA;
                if (fmap.containsKey(fB)) ways += fmap.get(fA) * fmap.get(fB) * (fA == fB ? 1 : 2);
            }
            fmap.put(num, ways);
            ans = (ans + ways) % 1000000007;
        }
        return (int)ans;
    }
}

C++ Code:

class Solution {
public:
    int numFactoredBinaryTrees(vector<int>& A) {
        sort(A.begin(), A.end());
        int len = A.size();
        long ans = 0;
        unordered_map<int, long> fmap;
        for (int num : A) {
            long ways = 1;
            double lim = sqrt(num);
            for (int j = 0, fA = A[0]; fA <= lim; fA = A[++j]) {
                if (num % fA != 0) continue;
                int fB = num / fA;
                if (fmap.find(fB) != fmap.end()) ways += fmap[fA] * fmap[fB] * (fA == fB ? 1 : 2);
            }
            fmap[num] = ways;
            ans = (ans + ways) % 1000000007;
        }
        return (int)ans;
    }
};

Discussion (2)

Thank you for your consistent solutions.

No problem at all! I'm definitely benefiting from the practice, as well.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/seanpgallivan/solution-binary-trees-with-factors-2kk4
CC-MAIN-2021-21
en
refinedweb
We likely know Kafka as a durable, scalable and fault-tolerant publish-subscribe messaging system. Recently I got a requirement to efficiently monitor and manage our Kafka cluster, and I started looking for different solutions. Kafka-manager is an open source tool introduced by Yahoo to manage and monitor the Apache Kafka cluster via a UI. As per their documentation on GitHub, below are the major features:

- Manage multiple clusters.
- Easy inspection of the cluster state.

Brokers:
- Run preferred replica election.
- Generate partition assignments with the option to select brokers to use.
- Run reassignment of a partition (based on generated assignments).

Topics:
- Create a topic with optional topic configs (0.8.1.1 has different configs than 0.8.2+).
- Delete topic (only supported on 0.8.2+; remember to set delete.topic.enable=true in the broker config).
- The topic list now indicates topics marked for deletion (only supported on 0.8.2+).
- Batch generate partition assignments for multiple topics with the option to select brokers to use.
- Batch run reassignment of partitions for multiple topics.
- Add partitions to an existing topic.
- Update config for an existing topic.

Metrics:
- Optionally filter out consumers that do not have ids/ owners/ & offsets/ directories in zookeeper.
- Optionally enable JMX polling for broker level and topic level metrics.

Prerequisites of Kafka Manager: We should have a running Apache Kafka with Apache Zookeeper.
- Apache Zookeeper
- Apache Kafka

Deployment on Kubernetes: After deployment, we should be able to access the Kafka Manager service. We have two files, kafka-manager-service.yaml and kafka-manager.yaml, to achieve the above-mentioned setup. Let's have a brief description of the different attributes used in these files.

Deployment configuration file:
namespace: provide a namespace to isolate the application within Kubernetes.
replicas: number of containers to spin up.
image: provide the path of the docker image to be used.
containerPorts: the port on which you want to run your application.
environment: "ZK_HOSTS" provides the address of the already running zookeeper.

Service configuration file: This file contains the details to create the Kafka Manager service on Kubernetes. For demo purposes, I have used the NodePort method to expose my service. As we are using Kubernetes for our underlying deployment platform, it is recommended not to use an external IP to access any service. Either go with a LoadBalancer or use ingress (the recommended method) rather than exposing all microservices. To configure ingress, please take a note from Kubernetes Ingress. Once we are able to access Kafka Manager we can see similar screens.

Cluster Management

Topic List

Major Issues

To resolve this you need to update the JMX settings while creating your docker image, as given below:

vim /opt/kafka

if [ -z "$KAFKA_JMX_OPTS" ]; then
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=$HOSTNAME -Djava.net.preferIPv4Stack=true"
fi

2 thoughts on "Kafka Manager On Kubernetes"

give us your ingress configuration man. how did u do an ingress accept a TCP request?

Hi @stan, a dummy configuration is provided; you can update the values as per your use case. You can refer to this as well for creating ingress.
https://blog.opstree.com/2019/04/23/kafka-manager-on-kubernetes/?replytocom=937
CC-MAIN-2021-21
en
refinedweb
Hi Enthusiastic Learners! In this article we learn about many Universal Functions or Built-In functions of NumPy. Universal Functions play a very crucial role in getting the best performance out of NumPy, and if you want to know the advantages and effects on performance of using Universal Functions (UFuncs), you should go through our article Why use Universal Functions or Built in Functions in NumPy? And to learn the basics of NumPy you can go through these 2 detailed articles; they will help you get a good understanding of creating & traversing NumPy arrays. In this article we will be covering the following topics:
- Arithmetic Universal Functions
- Trigonometric Universal Functions
- Exponent & Logarithmic Universal Functions

Arithmetic Universal Functions

The biggest advantage of using UFuncs for arithmetic operations is that they all look the same as our standard mathematical operators, that is, you can simply use '+', '-', '/' & '*' for their mathematically meaningful operations – Addition, Subtraction, Division & Multiplication respectively. Let's create an array and try out these operations. Just remember one thing: when we add or subtract a scalar value to or from an array, the operation is applied to all elements of that array.

import numpy as np

# Our Base Array
x = np.arange(1, 20, 3)
x
array([ 1, 4, 7, 10, 13, 16, 19])

print("x + 4 = " + str(x + 4))
print("x - 4 = " + str(x - 4))
print("x / 4 = " + str(x / 4))
print("x * 4 = " + str(x * 4))

x + 4 = [ 5 8 11 14 17 20 23]
x - 4 = [-3 0 3 6 9 12 15]
x / 4 = [0.25 1. 1.75 2.5 3.25 4. 4.75]
x * 4 = [ 4 16 28 40 52 64 76]

Note: One interesting thing to note here is that NumPy automatically selects which data type to use after any operation. In the above example, when we divided integer values of the array we got float (decimal) values in the output. Thus, it automatically promotes the result to the more general data type.

You can also perform the following operations:
- Negate all values of an array
- Find the modulus of all values of an array (remainder of values)
- Find the power of all numbers

print("Negate all values of array")
print("-x \t= " + str(-x))
print("\nModulus of all numbers with 4")
print("x % 4 \t= " + str(x % 4))
print("\nCalculating power of all numbers with 3")
print("x ** 3 \t=" + str(x ** 3))

Negate all values of array
-x = [ -1 -4 -7 -10 -13 -16 -19]

Modulus of all numbers with 4
x % 4 = [1 0 3 2 1 0 3]

Calculating power of all numbers with 3
x ** 3 =[ 1 64 343 1000 2197 4096 6859]

Corresponding to the above mathematical operators we also have standard NumPy functions, which are internally called whenever an operator is used. The list of these functions is as follows:
- "+" np.add
- "-" np.subtract
- "/" np.divide
- "*" np.multiply
- "-val" np.negative
- "%" np.mod
- "**" np.power

Trigonometric Universal Functions

We can use trigonometric functions to find both standard results and inverse trigonometric results. Let's begin by creating an array of different angles.

angle = np.arange(0, 15, 4)
angle
array([ 0, 4, 8, 12])

print("tan(angle) = " + str(np.tan(angle)))
print("\nsin(angle) = " + str(np.sin(angle)))
print("\ncos(angle) = " + str(np.cos(angle)))

tan(angle) = [ 0. 1.15782128 -6.79971146 -0.63585993]
sin(angle) = [ 0. -0.7568025 0.98935825 -0.53657292]
cos(angle) = [ 1. -0.65364362 -0.14550003 0.84385396]

Let's see inverse trigonometric functions too. First create an array on which we will apply the inverse trigonometric functions to get the corresponding angles.
values = [0, 1, -1]
values
[0, 1, -1]

print("arctan(values) = " + str(np.arctan(values)))
print("\narcsin(values) = " + str(np.arcsin(values)))
print("\narccos(values) = " + str(np.arccos(values)))

arctan(values) = [ 0. 0.78539816 -0.78539816]
arcsin(values) = [ 0. 1.57079633 -1.57079633]
arccos(values) = [1.57079633 0. 3.14159265]

NumPy also provides us with a set of hyperbolic trigonometric functions. Here is an example for them.

print("tanh(angle) = " + str(np.tanh(angle)))
print("\nsinh(angle) = " + str(np.sinh(angle)))
print("\ncosh(angle) = " + str(np.cosh(angle)))

tanh(angle) = [0. 0.9993293 0.99999977 1. ]
sinh(angle) = [0.00000000e+00 2.72899172e+01 1.49047883e+03 8.13773957e+04]
cosh(angle) = [1.00000000e+00 2.73082328e+01 1.49047916e+03 8.13773957e+04]

One thing to note here is that to get the hyperbolic functions all you had to do was add an 'h' at the end of each function, so it's very easy to remember. Similarly, you can check the inverse hyperbolic functions: add 'arc' before the function name and 'h' at the end, that is, arctanh(), arcsinh() and arccosh(). It makes things easy to remember.

Exponent & Logarithmic Universal Functions

Following is the list of exponential functions:
- exp(x) – e^x
- expm1(x) – e^x - 1 – Used when 'x' is very small; it provides more accuracy in comparison to exp(), however it is a little bit slower. So, try using it only when you have very small values.
- exp2(x) – 2^x – Used only when calculating powers of the scalar value '2'.
- power(n, x) – n^x – Any number raised to the power 'x'.

Let's see them in action.

# base array
x = np.arange(1, 8, 2)
x
array([1, 3, 5, 7])

print("-- e^x --")
print(np.exp(x))
print("\n-- 2^x --")
print(np.exp2(x))
print("\n-- 5^x --")
print(np.power(5, x))

-- e^x --
[ 2.71828183 20.08553692 148.4131591 1096.63315843]
-- 2^x --
[ 2. 8. 32. 128.]
-- 5^x --
[ 5 125 3125 78125]

x_small = np.array([0.01, 0.001, 0.0001, 0.00001])
x_small
array([1.e-02, 1.e-03, 1.e-04, 1.e-05])

print("EXP() -- Standard Function")
print(np.exp(x_small))
print("\nEXPM1() -- High Precision")
print(np.expm1(x_small))

EXP() -- Standard Function
[1.01005017 1.0010005 1.00010001 1.00001 ]
EXPM1() -- High Precision
[1.00501671e-02 1.00050017e-03 1.00005000e-04 1.00000500e-05]

As you can see, we get more precision while using expm1().

Following is the list of logarithmic functions:
- log(x) – Natural log
- log1p(x) – Natural log of (1 + x), with high precision. Use it when the value of 'x' is very small.
- log2(x) – Log with base 2
- log10(x) – Log with base 10

Let's see how they work.

# base array
x = np.arange(1, 8, 2)
x
array([1, 3, 5, 7])

print("-- log(x) --")
print(np.log(x))
print("\n-- log2(x) --")
print(np.log2(x))
print("\n-- log10(x) --")
print(np.log10(x))

-- log(x) --
[0. 1.09861229 1.60943791 1.94591015]
-- log2(x) --
[0. 1.5849625 2.32192809 2.80735492]
-- log10(x) --
[0. 0.47712125 0.69897 0.84509804]

print("LOG() -- Standard Log Function")
print(np.log(x_small))
print("\nLOG1P() -- High Precision")
print(np.log1p(x_small))

LOG() -- Standard Log Function
[ -4.60517019 -6.90775528 -9.21034037 -11.51292546]
LOG1P() -- High Precision
[9.95033085e-03 9.99500333e-04 9.99950003e-05 9.99995000e-06]

From the results it is clear that for very small numbers we get higher precision when we use the log1p() function.

In our next tutorial we will be learning aggregation functions in depth, as we have covered only a very few here. There are a lot more functions to explore yet. So stay tuned & keep learning!! And don't forget to check our YouTube Channel ML For Analytics.
You can also follow us on Facebook!!
https://mlforanalytics.com/2020/04/02/numpys-universal-functions-built-in-functions/
CC-MAIN-2021-21
en
refinedweb
Stein Series Release Notes

1.0.1

New Features
- Added a new driver for handling network policy support, as introduced in this blueprint. In order to enable it, the following configuration must be used:

[kubernetes]
enabled_handlers=vif,lb,lbaasspec,namespace,pod_label,policy,kuryrnetpolicy
pod_subnets_driver=namespace
pod_security_groups_driver=policy

1.0.0

New Features
- Added the possibility to ensure all OpenStack resources created by Kuryr are tagged. In the case of Neutron, the regular tags field is used. If Octavia supports tagging (from Octavia API 2.5, i.e. Stein), the tags field is used as well; otherwise tags are put on the description field. All this is controlled by the [neutron_defaults]resource_tags config option that can hold a list of tags to be put on resources. This feature is useful to correctly identify any leftovers in OpenStack after the K8s cluster Kuryr was serving gets deleted.
- It is now possible to use the same pool_driver for different pod_vif_drivers when using the MultiVIFPool driver. A new config option, vif_pool.vif_pool_mapping, is introduced, which is a dict/mapping from pod_vif_driver => pool_driver. So different pod_vif_drivers can be configured to use the same pool_driver.

[vif_pool]
vif_pool_mapping=nested-vlan:nested,neutron-vif:neutron

Earlier, each instance of a pool_driver was mapped to a single pod_vif_driver, thus requiring a unique pool_driver for each pod_vif_driver.

Upgrade Notes
- As announced, the possibility of running Kuryr-Kubernetes without the kuryr-daemon service is now removed from the project and considered not supported.
- If the vif_pool.pools_vif_drivers config option is used, the new config option vif_pool.vif_pool_mapping should be populated with the inverted mapping from the present value of vif_pool.pools_vif_drivers.

Deprecation Notes
- Configuration option vif_pool.pools_vif_drivers has been deprecated in favour of vif_pool.vif_pool_mapping to allow reuse of pool_drivers for different pod_vif_drivers. If vif_pool_mapping is not configured, pools_vif_drivers will still continue to work for now, but pools_vif_drivers will be completely removed in a future release.

0.6.0

New Features
- Added support for using cri-o (and podman & buildah) as the container engine in both container images and DevStack.

Upgrade Notes
- Before upgrading to T (0.7.x), run kuryr-k8s-status upgrade check to check if the upgrade is possible. In case of a negative result, refer to the kuryr-kubernetes documentation for mitigation steps.
https://docs.openstack.org/releasenotes/kuryr-kubernetes/stein.html
CC-MAIN-2021-21
en
refinedweb
Linux 4.4

From: Linus Torvalds
Date: Sun Jan 10 2016 - 18:25:52 EST

Nothing untoward happened this week, so Linux-4.4 is out in all the usual places.

The changes since rc8 aren't big. There's about one third arch updates, one third drivers, and one third "misc" (mainly some core kernel and networking). But it's all small.

Notable might be unbreaking the x86-32 "sysenter" ABI, when somebody (*cough*android-x86*cough*) misused it by not using the vdso and instead using the instruction directly.

Full shortlog appended for people who care or are just curious. And with this, the merge window for 4.5 is obviously open, even if I won't start actually pulling until tomorrow.

Linus

---

Alan Cox (1):
      mkiss: fix scribble on freed memory

Andrea Arcangeli (1):
      firmware: dmi_scan: Fix UUID endianness for SMBIOS >= 2.6

Andrey Ryabinin (1):
      sched/fair: Fix multiplication overflow on 32-bit systems

Andy Lutomirski (2):
      x86/entry: Fix some comments
      x86/entry: Restore traditional SYSENTER calling convention

Arnaldo Carvalho de Melo (2):
      perf list: Add support for PERF_COUNT_SW_BPF_OUT
      perf list: Robustify event printing routine

Ashok Raj (1):
      x86/mce: Ensure offline CPUs don't participate in rendezvous process

Ashutosh Dixit (1):
      dmaengine: Revert "dmaengine: mic_x100: add missing spin_unlock"

Bard Liao (1):
      ASoC: rt5645: add sys clk detection

Ben Skeggs (1):
      drm/nouveau/gr/nv40: fix oops in interrupt handler

Boris Ostrovsky (1):
      x86/xen: Avoid fast syscall path for Xen PV guests

Brian Norris (3):
      mtd: fix cmdlinepart parser, early naming for auto-filled MTD
      mtd: spi-nor: fix Spansion regressions (aliased with Winbond)
      mtd: spi-nor: fix stm_is_locked_sr() parameters

Charles Keepax (1):
      ASoC: Use nested lock for snd_soc_dapm_mutex_lock

Chris Metcalf (1):
      tile: provide CONFIG_PAGE_SIZE_64KB etc for tilepro

Colin Ian King (1):
      ftrace/scripts: Fix incorrect use of sprintf in recordmcount

Daniel J Blueman (1):
      x86/numachip: Fix NumaConnect2 MMCFG PCI access

David Ahern (1):
      net: Propagate lookup failure in l3mdev_get_saddr to caller

David Vrabel (1):
      x86/paravirt: Prevent rtc_cmos platform device init on PV guests

Florian Westphal (1):
      connector: bump skb->users before callback invocation

Francesco Ruggeri (1):
      net: possible use after free in dst_release

Geert Uytterhoeven (1):
      iommu/ipmmu-vmsa: Don't truncate ttbr if LPAE is not enabled

Hannes Frederic Sowa (1):
      bridge: Only call /sbin/bridge-stp for the initial network namespace

Hui Wang (1):
      ALSA: hda - Add keycode map for alc input device

Insu Yun (2):
      qlcnic: correctly handle qlcnic_alloc_mbx_args
      cxgb4: correctly handling failed allocation

Jens Axboe (1):
      Revert "block: Split bios on chunk boundaries"

John Fastabend (1):
      net: sched: fix missing free per cpu on qstats

Kailang (1):
      ALSA: hda - Add mic mute hotkey quirk for Lenovo ThinkCentre AIO

Kees Cook (1):
      ACPI / property: avoid leaking format string into kobject name

Kristian Evensen (1):
      net: qmi_wwan: Add WeTelecom-WPD600N

Linus Torvalds (1):
      Linux 4.4

Linus Walleij (2):
      ARM: nomadik: set latencies to 8 cycles
      ARM: versatile: fix MMC/SD interrupt assignment

Martin K. Petersen (1):
      sd: Reject optimal transfer length smaller than page size

Michael Petlan (2):
      perf buildid-list: Show running kernel build id fix
      perf buildid-list: Fix return value of perf buildid-list -k

Michal Hocko (1):
      vmstat: allocate vmstat_wq before it is used

NeilBrown (1):
      async_tx: use GFP_NOWAIT rather than GFP_IO

Nikesh Oswal (1):
      ASoC: arizona: Fix bclk for sample rates that are multiple of 4kHz

One Thousand Gnomes (1):
      6pack: fix free memory scribbles

Paolo Bonzini (1):
      kvm: x86: only channel 0 of the i8254 is linked to the HPET

Peter Zijlstra (3):
      perf: Fix race in perf_event_exec()
      perf: Fix race in swevent hash
      sched/core: Fix unserialized r-m-w scribbling stuff

Qiu Peiyang (1):
      tracing: Fix setting of start_index in find_next()

Rabin Vincent (2):
      net: filter: make JITs zero A for SKF_AD_ALU_XOR_X
      ARM: net: bpf: fix zero right shift

Rainer Weikusat (1):
      af_unix: Fix splice-bind deadlock

Rameshwar Prasad Sahu (1):
      dmaengine: xgene-dma: Fix double IRQ issue by setting IRQ_DISABLE_UNLAZY flag

Richard Cochran (1):
      PCI: dra7xx: Mark driver as broken

Robin Murphy (3):
      iommu/dma: Add some missing #includes
      iommu/dma: Avoid unlikely high-order allocations
      iommu/dma: Use correct offset in map_sg

Roman Volkov (1):
      dts: vt8500: Add SDHC node to DTS file for WM8650

Sebastian Andrzej Siewior (1):
      sched/core: Reset task's lockless wake-queues on fork()

Sergey Senozhatsky (1):
      sched/core: Check tgid in is_global_init()

Shrikrishna Khare (1):
      Driver: Vmxnet3: Fix regression caused by 5738a09

Steven Rostedt (Red Hat) (1):
      ftrace/module: Call clean up function when module init fails early

Thomas Gleixner (1):
      genirq: Prevent chip buslock deadlock

Timo Sigurdsson (2):
      ARM: Fix broken USB support in sunxi_defconfig
      ARM: Fix broken USB support in multi_v7_defconfig for sunxi devices

Tony Lindgren (1):
      ARM: OMAP2+: Fix onenand rate detection to avoid filesystem corruption

Vinod Koul (2):
      ASoC: Intel: Skylake: Revert previous broken fix memory leak fix
      ASoC: Intel: Skylake: Fix the memory leak

Wang Nan (3):
      perf hists browser: Add NULL pointer check to prevent crash
      perf hists browser: Reset selection when refresh
      perf hists browser: Fix segfault if use symbol filter in cmdline

Yuchung Cheng (1):
      tcp: fix zero cwnd in tcp_cwnd_reduction

hayeswang (1):
      r8152: add reset_resume function
http://lkml.iu.edu/hypermail/linux/kernel/1601.1/01592.html
CC-MAIN-2019-09
en
refinedweb
import "nsIEventTarget.idl"; Dispatch an event to this event target. This function may be called from any thread, and it may be called re-entrantly. Check to see if this event target is associated with the current thread. This flag specifies the default mode of event dispatch, whereby the event is simply queued for later processing. When this flag is specified, dispatch returns immediately after the event is queued..
http://doxygen.db48x.net/comm-central/html/interfacensIEventTarget.html
CC-MAIN-2019-09
en
refinedweb
Is there any functionality for restricting custom object creation in SAP, similar to how standard objects require an Access Key to be generated?

If you get a namespace registered to you, others cannot change the objects in that namespace without a key. What is it you are trying to achieve with your restriction?

In our system, the user must be set up as "DEVELOPER" in the Group field of the Logon Data tab of transaction SU01 in order to create or change objects in the customer namespace. The user must also be registered as a developer in OSS.
https://answers.sap.com/questions/137192/how-to-limit-creation-of-custom-objects.html
CC-MAIN-2019-09
en
refinedweb
Using objects that implement IDisposable

The common language runtime's garbage collector reclaims the memory used by managed objects, but types that use unmanaged resources implement the IDisposable interface to allow the memory allocated to these unmanaged resources to be reclaimed. When you finish using an object that implements IDisposable, you should call the object's IDisposable.Dispose implementation. You can do this in one of two ways:

- With the C# using statement or the Visual Basic Using statement.
- By implementing a try/finally block.

The using statement

The using statement in C# and the Using statement in Visual Basic simplify the code that you must write to create and clean up an object:

using System.IO;

public class Example
{
    public static void Main()
    {
        Char[] buffer = new Char[50];
        using (StreamReader s = new StreamReader("File1.txt"))
        {
            int charsRead = 0;
            while (s.Peek() != -1)
            {
                charsRead = s.Read(buffer, 0, buffer.Length);
                //
                // Process characters read.
                //
            }
        }
    }
}

Imports System.IO

Module Example
    Public Sub Main()
        Dim buffer(49) As Char
        Using s As New StreamReader("File1.txt")
            Dim charsRead As Integer
            Do While s.Peek() <> -1
                charsRead = s.Read(buffer, 0, buffer.Length)
                '
                ' Process characters read.
                '
            Loop
        End Using
    End Sub
End Module

Note that the using statement is equivalent to the following try/finally block:

using System.IO;

public class Example
{
    public static void Main()
    {
        Char[] buffer = new Char[50];
        {
            StreamReader s = new StreamReader("File1.txt");
            try
            {
                int charsRead = 0;
                while (s.Peek() != -1)
                {
                    charsRead = s.Read(buffer, 0, buffer.Length);
                    //
                    // Process characters read.
                    //
                }
            }
            finally
            {
                if (s != null) ((IDisposable)s).Dispose();
            }
        }
    }
}

Imports System.IO

Module Example
    Public Sub Main()
        Dim buffer(49) As Char
        Dim s As New StreamReader("File1.txt")
        Try
            Dim charsRead As Integer
            Do While s.Peek() <> -1
                charsRead = s.Read(buffer, 0, buffer.Length)
                '
                ' Process characters read.
                '
            Loop
        Finally
            If s IsNot Nothing Then DirectCast(s, IDisposable).Dispose()
        End Try
    End Sub
End Module

This may be your personal coding style, or you might want to do this for one of the following reasons:

- To include a catch block to handle any exceptions thrown in the try block. Otherwise, any exceptions thrown by the using statement are unhandled, as are any exceptions thrown within the using block if a try/catch block isn't present.

Imports System.Globalization
Imports System.IO

Module Example
    Public Sub Main()
        Dim sr As StreamReader = Nothing
        Try
            sr = New StreamReader("file1.txt")
            Dim contents As String = sr.ReadToEnd()
            Console.WriteLine("The file has {0} text elements.",
                              New StringInfo(contents).LengthInTextElements)
        Finally
            If sr IsNot Nothing Then sr.Dispose()
        End Try
    End Sub
End Module

You can follow this basic pattern if you choose to implement, or must implement, a try/finally block because your programming language doesn't support a using statement but does allow direct calls to the Dispose method.
https://docs.microsoft.com/en-us/dotnet/standard/garbage-collection/using-objects
CC-MAIN-2019-09
en
refinedweb
Blogging on App Engine Interlude: Editing and listing

Posted by Nick Johnson | Filed under coding, app-engine, tech, bloggart

This is part of a series of articles on writing a blogging system on App Engine. An overview of what we're building is here. A couple of things didn't quite make it into part 2 of the series: Listing and editing posts in the admin interface. This post is a short 'interlude' between the main posts in the series, and briefly covers the changes needed for those features.

Editing posts requires surprisingly little work, thanks to our use of the Django forms library. First, we write a decorator function that we can attach to methods that require an optional post ID, loading the relevant BlogPost object for us:

def with_post(fun):
  def decorate(self, post_id=None):
    post = None
    if post_id:
      post = BlogPost.get_by_id(int(post_id))
      if not post:
        self.error(404)
        return
    fun(self, post)
  return decorate

Then, we enhance the PostHandler to take an optional post ID argument, using the decorator we just defined. Here's the new get() method:

@with_post
def get(self, post):
  self.render_form(PostForm(instance=post))

If no post ID is supplied, post is None, and the form works as it used to. If a post ID is supplied, the post variable contains the post to be edited, and the form pre-fills all the relevant information. The same applies to the post() method. Now all we have to do is add an additional entry for the PostHandler in the webapp mapping:

('/admin/post/(\d+)', PostHandler),

Listing posts is extremely simple: First, we refactor the 'render_to_response' method into a BaseHandler, as we suggested in part 2. Then, we create a new AdminHandler. This handler simply fetches a set of posts from the datastore, ordered by publication date, and renders a template listing them. Here's the full code for the AdminHandler, most of which is concerned with providing the Django templates with the correct offsets to use for generating next and previous links and the post count:

class AdminHandler(BaseHandler):
  def get(self):
    offset = int(self.request.get('start', 0))
    count = int(self.request.get('count', 20))
    posts = BlogPost.all().order('-published').fetch(count, offset)
    template_vals = {
      'offset': offset,
      'count': count,
      'last_post': offset + len(posts) - 1,
      'prev_offset': max(0, offset - count),
      'next_offset': offset + count,
      'posts': posts,
    }
    self.render_to_response("index.html", template_vals)

Finally, Sylvain, from the #appengine IRC channel, pointed out that the blog as it stands doesn't handle unicode gracefully. Fortunately, fixing that is simple - instead of setting the mime type for generated pages to "text/html", we set it to "text/html; charset=utf-8". This simple change is all that's required. You can see the diff for that, along with internationalization improvements to the slugify function, here.
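For reference, the refactored BaseHandler mentioned above might look something like this minimal sketch (the 'templates' directory layout and the charset header are assumptions for illustration, not the author's exact code):

import os
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template

class BaseHandler(webapp.RequestHandler):
  def render_to_response(self, template_name, template_vals):
    # Resolve the template file relative to this module; the
    # 'templates' directory name here is illustrative.
    path = os.path.join(os.path.dirname(__file__), 'templates', template_name)
    self.response.headers['Content-Type'] = 'text/html; charset=utf-8'
    self.response.out.write(template.render(path, template_vals))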
http://blog.notdot.net/2009/10/Blogging-on-App-Engine-Interlude-Editing-and-listing
CC-MAIN-2019-09
en
refinedweb
Return binary media from a Lambda proxy integration

To return binary media from an AWS Lambda proxy integration, base64 encode the response from your Lambda function. You must also configure your API's binary media types. To use a web browser to invoke an API with this example integration, set your API's binary media types to */*. API Gateway uses the first Accept header from clients to determine if a response should return binary media. To return binary media when you can't control the order of Accept header values, such as requests from a browser, set your API's binary media types to */* (for all content types).

The following example Python 3 Lambda function can return a binary image from Amazon S3 or text to clients. The function's response includes a Content-Type header to indicate to the client the type of data that it returns. The function conditionally sets the isBase64Encoded property in its response, depending on the type of data that it returns.

import base64
import boto3
import json
import random

s3 = boto3.client('s3')

def lambda_handler(event, context):
    number = random.randint(0, 1)
    if number == 1:
        response = s3.get_object(
            Bucket='bucket-name',
            Key='image.png',
        )
        image = response['Body'].read()
        return {
            'headers': {"Content-Type": "image/png"},
            'statusCode': 200,
            'body': base64.b64encode(image).decode('utf-8'),
            'isBase64Encoded': True
        }
    else:
        return {
            'headers': {"Content-Type": "text/html"},
            'statusCode': 200,
            'body': "<h1>This is text</h1>",
        }

To learn more about binary media types, see Working with binary media types for REST APIs.
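To exercise the integration from a client, you can branch on the returned Content-Type header; a minimal sketch (the invoke URL below is a placeholder, not a real endpoint):

import requests

url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/media"  # placeholder

resp = requests.get(url, headers={"Accept": "*/*"})
if resp.headers.get("Content-Type") == "image/png":
    # API Gateway has already decoded the base64 body back to binary.
    with open("response.png", "wb") as f:
        f.write(resp.content)
else:
    print(resp.text)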
https://docs.aws.amazon.com/apigateway/latest/developerguide/lambda-proxy-binary-media.html
CC-MAIN-2022-05
en
refinedweb
MongoDB + SMS By Connect

It's easy to connect MongoDB + SMS By Connect. Send SMS on specified number(s).

MongoDB is a document database that helps developers to store data in a flexible, scalable way. Its database system is open-source and compatible with various platforms including Windows, Linux, Mac, Solaris, etc. Its high performance allows storing a huge amount of data within a short time. MongoDB also provides a high availability option to get access to the database by using a replica set or sharded cluster. We will use a MongoDB database to store data related to our business, for which we have developed the SMS By Connect application. This data could be stored in the form of an array, document or key-value pair. We can also create an index on a MongoDB collection for faster retrieval by specifying the index name, field name and the operator to compare the field value.

SMS By Connect is an app available for Android users that enables you to send messages directly from your computer via an internet connection. You do not need to pay for sending messages if you are connected to the internet at home or the office. At present, SMS By Connect supports sending free text messages to the following countries: Australia, Canada, China, France, Germany, India, Norway, Pakistan, Philippines, Poland, South Africa, United Kingdom and United States. The current version of the software allows you to send SMS via the mobile carrier network, but you can't send MMS as it requires a special MMSC gateway.

In this article we will integrate MongoDB and SMS By Connect as shown in Figure 1.

Figure 1. Integration of MongoDB and SMS By Connect

This integration will enable us to store SMS sent from our computer in the database and retrieve them again after a certain time interval. Once we have collected a significant amount of data with a reasonable time interval we can analyze this data for marketing purposes, such as finding out how many people from which countries have been sending messages per day and how many messages have been sent from a particular country. We can also find out which countries have been sending the most messages per month or day. In addition, we can find out which days of the week have been popular for sending messages, and so on. These statistics could be helpful for our business, as we will be able to know the interests of our customers and accordingly plan strategies to bring in more customers by offering attractive deals or discounts on our products or services.

Figure 2. Creating a new application as "SMS By Connect" package name

dependencies {
    compile 'org.mongodb:mongo-java-driver:2.11.0'
}

Figure 3. Adding dependency for mongo-java-driver into build.gradle file of "SMS By Connect" project

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:
    <uses-sdk android:
    <uses-permission android:
    <application android:
        <activity android:
            <intent-filter>
                <action android:
                <category android:
            </intent-filter>
        </activity>
    </application>
</manifest>

package net.learn2develop.SmsByConnect;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.widget.TextView;

import com.mongodb.*;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        String dbConnStr = getApplicationContext().getString(R.string.dbConnStr);
        ClassFactory cf = ClassFactory.newInstance(MongoClientsFactory.class);
        MongoClient client = cf.newMongoClient(dbConnStr);
        driver = client.getDriver();
        TextView tv = (TextView) findViewById(R.id.tv1);
    }

    private void createDatabase() {
        String dbPath = "";
        try {
            dbPath = Environment.getExternalStorageDirectory().getAbsolutePath() + "/SMSByConnect";
        } catch (Exception e) {
            Logger.e(TAG, "Error getting path for database" + e);
        }
        try {
            Collection<String> col = client.getCollection("messages");
            col.ensureIndex("sentDate", new Date());
            col.insertMany([ "sentDate", "phoneNo", "phoneNo2", "phoneNo3", "phoneNo4",
                "phoneNo5", "phoneNo6", "phoneNo7", "phoneNo8", "phoneNo9", "phoneNo10",
                "phoneNo11", "phoneNo12", "phoneNo13", "phoneNo14", "phoneNo15", "phoneNo16",
                "phoneNo17", "phoneNo18", "phoneNo19", "phoneNo20", "phoneNo21", "phoneNo22",
                "phoneNo23", "phoneNo24", "phoneNo25", "phoneNo26", "phoneNo27", "phoneNo28",
                "phoneNo29", "sentDate"]);
            client.close();
        } catch (Exception e) {
            Logger.e(TAG, "Error creating database collection" + e);
        }
    }
}

Figure 4. Running the app in an emulator or physical device after integrating MongoDB and SMS By Connect, with user input fields for entering phone number, phone number 2, phone number 3, phone number 4, etc., along with the time interval in minutes between each message being sent out from the computer to the emulator/device.

The process to integrate MongoDB and SMS By Connect may seem complicated and intimidating. This is why Appy Pie Connect has come up with a simple, affordable, and quick solution to help you automate your workflows. Click on the button below to begin.
https://www.appypie.com/connect/apps/mongodb/integrations/sms-by-connect
CC-MAIN-2022-05
en
refinedweb
Investors in Avalara Inc (Symbol: AVLR) saw new options become available this week, for the October 18th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the AVLR options chain for the new October 18th contracts and identified one put and one call contract of particular interest.

The put contract at the $65.00 strike price has a current bid of 85 cents. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $65.00, but will also collect the premium, putting the cost basis of the shares at $64.15 (before broker commissions). To an investor already interested in purchasing shares of AVLR, that could represent an attractive alternative to paying $77.62/share today.

Because the $65.00 strike represents an approximate 16% discount to the current trading price of the stock (in other words, it is out-of-the-money by that percentage), there is also the possibility that the put contract would expire worthless. If the contract expires worthless, the premium would represent a 1.31% return on the cash commitment, or 12.24% annualized — at Stock Options Channel we call this the YieldBoost.

Below is a chart showing the trailing twelve month trading history for Avalara Inc, and highlighting in green where the $65.00 strike is located relative to that history:

Turning to the calls side of the option chain, the call contract at the $80.00 strike price has a current bid of $3.70. If an investor was to purchase shares of AVLR stock at the current price level of $77.62/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $80.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 7.83% if the stock gets called away at the October 18th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if AVLR shares really soar, which is why looking at the trailing twelve month trading history for Avalara Inc, as well as studying the business fundamentals, becomes important. Below is a chart showing AVLR's trailing twelve month trading history, with the $80.00 strike highlighted in red:

Considering the fact that the $80.00 strike represents an approximate 3% premium to the current trading price of the stock (in other words, it is out-of-the-money by that percentage), there is also the possibility that the covered call contract would expire worthless, in which case the investor would keep both their shares of stock and the premium collected. The premium would represent a 4.77% boost of extra return to the investor, or 44.61% annualized, which we refer to as the YieldBoost.

The implied volatility in the put contract example is 55%, while the implied volatility in the call contract example is 52%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 250 trading day closing values as well as today's price of $77.62) to be 50%.
CC-MAIN-2022-05
en
refinedweb
Hi. I have a model and an optimizer, to which I apply amp from apex package from apex import amp model= ... optimizer= ... model, optimizer = amp.initialize(model, optimizer, opt_level='O1') After the model have been wrapped in amp, I would like to access one of its weights and change it. For example: model.conv1.weight.data = new_tensor The point is when I do this, it has no effect. It looks like amp keeps a different copy of the weights, and when updating the weight on the fly, there is no effect. Is there any possibility to update the weights on the flight after my model has been wrapped by amp? Thanks
https://discuss.pytorch.org/t/change-weights-of-a-model-wrapped-in-amp/102295/3
CC-MAIN-2022-05
en
refinedweb
The TestProject OpenSDK allows you to execute your Selenium and Appium tests using the power of the TestProject platform. This means you'll benefit from automatic test reporting in HTML and PDF, automatic updates and configuration of your Selenium browser drivers, collaborative reporting dashboards, and much more, all using a single, free and open-source SDK. Recently, a new member of the OpenSDK family has been released: the TestProject C# OpenSDK 🎉 In this article, we'll take a look at how to get started with this brand new C# SDK. We'll first see how we can convert an existing Selenium-based test to a TestProject-powered test, and then we will explore some of the reporting features of the SDK.

💥 Watch this live hands-on webinar recording to get started with the TestProject C# OpenSDK! 💥

Table of Contents
- Installing the TestProject C# OpenSDK
- Configuring your TestProject Agent and Developer Token
- Creating and running your first test
- Inspecting the test reports in the cloud
- SpecFlow support
- Summary
- Hands-on webinar recording + full presentation slides attached!

Installing the TestProject C# OpenSDK

The TestProject C# OpenSDK is 100% free and open-source and is available as a NuGet package. When you add this package to your C# project, either through the NuGet package manager in Visual Studio or by using the command line, the SDK, as well as its dependencies, will be added to your project or solution.

Configuring your TestProject Agent and Developer Token

As with the other TestProject OpenSDKs (Python and Java), tests are run using the TestProject Agent, which takes care of browser driver detection, installation and configuration and sends reports to TestProject Cloud. The TestProject Agent can be downloaded from here. By default, the SDK will communicate with the Agent on its default address. If you're running the Agent on another port, or even an entirely different machine, you can configure the correct address by setting the TP_AGENT_URL environment variable to the correct address.

After installing and registering the Agent (watch this great quick video to see it in action and learn even more about the Agent's power), you will also need to generate and configure a developer token. You can get this token from the TestProject platform and either set it in the TP_DEV_TOKEN environment variable, or pass it explicitly in your test code when you create a new driver.

Creating and running your first test

Now that all setup work is done, let's take the C# OpenSDK for a spin 🤸‍♂️ Here is an example test, using Selenium and the NUnit unit testing framework, that opens the TestProject demo application, logs in, and checks that a greeting message is displayed:

namespace TestProject.OpenSDK.Examples.NUnit
{
    using NUnit.Framework;
    using OpenQA.Selenium;
    using TestProject.OpenSDK.Drivers.Web;

    [TestFixture]
    public class ExampleTest
    {
        private ChromeDriver driver;

        [SetUp]
        public void StartBrowser()
        {
            this.driver = new ChromeDriver();
        }

        [Test]
        public void ExampleTestUsingChromeDriver()
        {
            this.driver.Navigate().GoToUrl("");
            this.driver.FindElement(By.CssSelector("#name")).SendKeys("John Smith");
            this.driver.FindElement(By.CssSelector("#password")).SendKeys("12345");
            this.driver.FindElement(By.CssSelector("#login")).Click();

            Assert.IsTrue(this.driver.FindElement(By.CssSelector("#greetings")).Displayed);
        }

        [TearDown]
        public void CloseBrowser()
        {
            this.driver.Quit();
        }
    }
}

💡 For those of you that are familiar with developing Selenium-based tests in C#, this code will probably be easy to understand.
Inspecting the test reports

Once your TestProject Agent and the corresponding development token have been configured, the Agent will automatically report your test results to the TestProject cloud. After running the above test, if you go to the Reports tab, you'll see a project called 'NUnit':

💡 Note: The C# OpenSDK supports automatically inferring project and job names for all major C# unit testing frameworks, i.e., MSTest, NUnit and xUnit.NET. The project name, unless explicitly specified, is the last section of the namespace that the current test is in (hence in this case 'NUnit'). The automatically inferred job name is the name of the class that the test method is in, in this case 'ExampleTest'.

If you open this 'NUnit' project, you'll see the 'ExampleTest' job, corresponding test executions, and the tests that have been run as part of these:

As you can see, adding the OpenSDK to our C# Selenium-based test and running it through the TestProject Agent produced a detailed report containing all the driver commands that were executed during this test.

Like the other OpenSDKs, the TestProject C# OpenSDK offers a variety of options to customize your reporting. These options have been described in an article covering the Python OpenSDK. All options and settings described in that article are available in the C# OpenSDK, too.

SpecFlow support

An incredibly exciting feature of the TestProject C# OpenSDK is that it does not only support the major C# unit testing frameworks MSTest, NUnit, and xUnit.NET, but that it offers cloud reporting of SpecFlow-based tests, too. What exactly this entails, you will read in an upcoming article, coming soon! 🔜😉

Summary

As we've seen in this tutorial, with just one unified SDK (that's also available in Java and Python, by the way), developers and testers receive a go-to toolset, solving some of the greatest challenges in open-source test automation. Using this SDK will save you a lot of time, with the following benefits that come out of the box:

- Open source and available as a Maven dependency / a NuGet package / a PyPI project.
- 5-minute simple Selenium and Appium setup with a single Agent deployment.
- Automatic test reports in HTML/PDF format (including screenshots and customization capabilities).
- Collaborative reporting dashboards with execution history and RESTful API support.
- Always up-to-date with the latest and stable Selenium driver version.
- A simplified, familiar syntax for both web and mobile applications.
- Complete test runner capabilities for both local and remote executions, anywhere.
- Cross-platform support for Mac, Windows, Linux, and Docker.
- Ability to store and execute tests locally on any source control tool, such as Git.

For those of you who prefer watching tutorials – you should definitely watch this hands-on webinar recording hosted by Bas Dijkstra and Matthias Rapp ✨

👉 Get the Presentation Slides 👈

I liked this OpenSDK very much, easier to build the tests integrated with the TestProject Agent. I still have a question that I couldn't find here: what if I want to build using TestProject's TDD with data coming from a csv file, for example?
In the previous SDK you have, for example:

[ParameterAttribute(DefaultValue = "NotFound", Direction = ParameterDirection.Input)]
public string machineName;

But I downloaded the new SDK and couldn't find this. Is there any topic related to this in the new SDK? Thanks in advance!

Luiz Waldrich

I was reading and I found my answer =)
https://blog.testproject.io/2021/01/27/getting-started-with-testproject-csharp-opensdk/
In this post we are going to take a look at Jib, a tool from Google for creating Docker images in an easy and fast way. There is no need to write a Docker file and no need to install a Docker daemon: Jib just runs out-of-the-box.

1. Introduction

Up till now, we have been using the dockerfile-maven-plugin from Spotify in order to build and push our Docker images. This requires us to write a Docker file according to best practices, to install a Docker daemon, and to add the plugin to our build process. Jib provides us a much easier way to create our Docker images. We only need to add and configure the Maven plugin, and that is about it. Of course, we only believe this when we have tried it ourselves, and that is exactly what we are going to do. We will create a simple Spring Boot application, containerize it with the Jib Maven plugin and push it to Docker Hub. Next, we will pull the image and run the Docker container. The sources are available at GitHub.

We are using:

- Ubuntu 18.04
- Spring Boot 2.2.1
- Java 11
- Jib Maven Plugin 1.8.0
- An account at Docker Hub

More information about Jib can be found at the Google Cloud Platform Blog and at GitHub.

2. Create the Application

First, we will create a simple Spring Boot application. We add the Spring Actuator and Spring Web MVC dependencies to our pom. Spring Actuator will provide us the means to add health checks.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

Our application consists of a Rest controller which returns a hello message and the address of the machine.

@RestController
public class HelloController {

    @RequestMapping("/hello")
    public String hello() {
        StringBuilder message = new StringBuilder("Hello Jib Maven Plugin!");
        try {
            InetAddress ip = InetAddress.getLocalHost();
            message.append(" From host: " + ip);
        } catch (UnknownHostException e) {
            e.printStackTrace();
        }
        return message.toString();
    }

}

Run the application locally:

$ mvn spring-boot:run

After successful startup, we invoke the http://localhost:8080/hello URL, which returns us the following message:

Hello Jib Maven Plugin! From host: gunter-Latitude-5590/127.0.1.1

3. Setup Jib and Docker Hub

In this section, we will add the Jib Maven plugin and ensure that we have a successful connection to Docker Hub Registry. It has been quite a struggle to get this working properly, mainly due to a lack of documentation. The official Jib documentation is quite vague about secure authentication methods. Most of the examples consist of adding plain text credentials to the pom or to the Maven settings file. But that is not what we want. We want a secure way of connecting to Docker Hub by means of a Docker Credential Helper.

In order to test the connection, we add the Jib Maven plugin to our pom and configure it to retrieve a base image and to push this image to Docker Hub.

<plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>jib-maven-plugin</artifactId>
    <version>1.8.0</version>
    <configuration>
        <!-- openjdk:11.0.5-jre -->
        <from>
            <image>openjdk:11.0.5-jre</image>
        </from>
        <to>
            <image>docker.io/${docker.image.prefix}/${project.artifactId}</image>
            <credHelper>pass</credHelper>
        </to>
    </configuration>
</plugin>

The from tag contains our base image, just like the FROM statement in a Docker file. The to tag contains the image we want to push. The ${docker.image.prefix} is set to mydeveloperplanet (our Docker Hub account); you will need to change this to your own account. The ${project.artifactId} contains the project artifact myjibplanet. In order to make use of a Credential Helper, we set the tag credHelper to pass.
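As an aside: jib:build, used throughout this post, pushes the image straight to the configured registry and needs no local Docker daemon at all. If you do happen to have a daemon running and want to inspect the image locally before pushing, the plugin also ships a jib:dockerBuild goal:

# Push the image directly to the registry configured in the pom (no daemon needed)
mvn compile jib:build

# Alternatively, build the image into a local Docker daemon for a quick local test
mvn compile jib:dockerBuild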
Before starting, we need to set up a GPG key if you do not already have one, see also the Ubuntu help pages.

$ gpg --gen-key

For ease of use, you can add the generated key to your profile as an environment variable. Add the following line to your .profile, where you replace Your_GPG_Key with your key.

export GPGKEY=Your_GPG_Key

Source your .profile in order to make the environment variable available.

$ source .profile

You can also choose to send your key to the Ubuntu keyserver, but it is not necessary in order to execute the next steps.

$ gpg --send-keys --keyserver keyserver.ubuntu.com $GPGKEY

Install pass and initialize a password store with your GPG key.

$ sudo apt install pass
$ pass init Your_GPG_Key
mkdir: created directory '/home/gunter/.password-store/'
Password store initialized for My Password Storage Key

The next thing to do is to download and unpack the Docker Credential Helper and make the file executable.

$ wget
$ tar xvzf docker-credential-pass-v0.6.3-amd64.tar.gz
$ mv docker-credential-pass /usr/bin
$ chmod +x docker-credential-pass

The Docker Credential Helper needs to be configured correctly, and this is where the documentation falls short. In the documentation it is stated to add the contents { "credStore": "pass" }, but with this configuration, Jib will not be able to connect to the Docker Hub Registry. We found the following issue where the use of credStore is not supported anymore for the Google Cloud Registry. Create a config.json file with the following content instead:

{
    "credHelpers": {
        "": "pass"
    }
}

Initialize the Docker Credential Helper. Enter the password "pass is initialized" when being asked for a password.

$ pass insert docker-credential-helpers/docker-pass-initialized-check
mkdir: created directory '/home/gunter/.password-store/docker-credential-helpers'
Enter password for docker-credential-helpers/docker-pass-initialized-check:
Retype password for docker-credential-helpers/docker-pass-initialized-check:

Check whether the password is correctly set:

$ pass show docker-credential-helpers/docker-pass-initialized-check
pass is initialized

Login with your Docker credentials. A warning is raised saying that your password is stored unencrypted in the file config.json. We could not figure out why this warning is being raised, because the credentials are stored encrypted in the config.json file.

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to Docker Hub to create one.
Username: your_user_name
Password:
WARNING! Your password will be stored unencrypted in /home/gunter/.docker/config.json.
Configure a credential helper to remove this warning. See
Login Succeeded

From now on, it is possible to execute docker login without the need for entering your credentials.

$ docker login
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /home/gunter/.docker/config.json.
Configure a credential helper to remove this warning. See
Login Succeeded

You can log out again with docker logout:

$ docker logout
Removing login credentials for

Ensure that you are logged in again and run the Maven Jib build command:

$ mvn compile jib:build

The image is successfully built and pushed to Docker Hub.
Two warnings are raised during the build.

Base image 'openjdk:11.0.5-jre' does not use a specific image digest - build may not be reproducible

This can easily be solved by replacing openjdk:11.0.5-jre with the sha256 key for the base image: openjdk@sha256:b3e19d27caa8249aad6f90c6e987943d03e915bbf3a66bc1b7f994a4fed668f6

The credential helper (docker-credential-pass) has nothing for server URL:

This is a strange warning, because the credentials for this URL are resolved and used for pushing the image.

4. Configure Jib for Our Application

Now that we have configured the authentication in a secure way, we can continue with configuring the Jib Maven plugin for our application. We add a tag to our image and specify the main class.

<to>
    <image>docker.io/${docker.image.prefix}/${project.artifactId}</image>
    <credHelper>pass</credHelper>
    <tags>
        <tag>${project.version}</tag>
    </tags>
</to>
<container>
    <mainClass>com.mydeveloperplanet.myjibplanet.MyJibPlanetApplication</mainClass>
</container>

Do not add the tag format with value OCI to your container configuration. Docker Hub does not yet support OCI completely, and an error message will be shown: 'An error occurred while loading the tags. Try reloading the page'.

Build the image again and pull the Docker image:

$ docker pull mydeveloperplanet/myjibplanet
Using default tag: latest
latest: Pulling from mydeveloperplanet/myjibplanet
844c33c7e6ea: Pull complete
ada5d61ae65d: Pull complete
f8427fdf4292: Pull complete
a5217f27a28f: Pull complete
176e83ebae4f: Pull complete
800204250483: Pull complete
492e142ab90b: Pull complete
7c8e6198cd4b: Pull complete
c49bb7f02774: Pull complete
Digest: sha256:b7144bfdf6ee47d6b38914a84789ef9f7e2117320080b28ce39c385ee399a0c8
Status: Downloaded newer image for mydeveloperplanet/myjibplanet:latest
docker.io/mydeveloperplanet/myjibplanet:latest

Run the image and map it to port 8080:

$ docker run -p 127.0.0.1:8080:8080/tcp mydeveloperplanet/myjibplanet
...
2019-12-25 09:57:13.196 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2019-12-25 09:57:13.205 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 9 ms

List the Docker containers:

$ docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED          STATUS          PORTS                      NAMES
c05e431b0bd1   mydeveloperplanet/myjibplanet   "java -cp /app/resou…"   13 seconds ago   Up 12 seconds   127.0.0.1:8080->8080/tcp   recursing_meninsky

We only need to retrieve the IP address of our Docker container:

$ docker inspect c05e431b0bd1
...
"NetworkSettings": {
    ...
    "IPAddress": "172.17.0.2",
    ...
}
...

The URL of our application can now be invoked with http://172.17.0.2:8080/hello. This returns us the welcome message:

Hello Jib Maven Plugin! From host: c05e431b0bd1/172.17.0.2

We have one more issue to solve: our application runs as root in the Docker container. This is not something we want, because of security. First, we will check which users are available in the Docker container:

$ docker exec -it -u root c05e431b0bd1 cat /etc/passwd
...
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
...

The Docker container contains a user nobody, which is the one we can use for running our application. Add the user tag to the pom:

<container>
    <mainClass>com.mydeveloperplanet.myjibplanet.MyJibPlanetApplication</mainClass>
    <user>nobody</user>
</container>

Build the image, pull it and run it. Verify with docker inspect whether nobody is used as user.

...
"Config": {
    "Hostname": "76b3afaca3af",
    "Domainname": "",
    "User": "nobody",
    ...
}
...

In our pom, we also added Spring Actuator. There is no option to add a Docker healthcheck via Jib. This must be resolved with a liveness probe and readiness probe in the Kubernetes configuration, see also this issue.
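To make that concrete, a minimal probe definition for a deployment of this container could look like the sketch below. It assumes the Spring Actuator health endpoint is exposed at its default path /actuator/health:

livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 10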
5. Conclusion

We experimented with the Jib Maven plugin in order to create our Docker images. Configuring the credentials for the Docker Hub Registry was a real struggle, but once this was set up, the plugin was really easy to use. Besides that, no Docker daemon is needed and you do not need to write a separate Docker file. Last but not least, it is really fast. We will definitely be using this plugin in the near future.

small typo: "The ${project.artifactId} contains the version 1.0-SNAPSHOT.", should be: The ${project.artifactId} contains the artifactId myjibplanet. Thanks for the post 🙂

You are absolutely right, I updated the post. Thank you for letting me know!
https://mydeveloperplanet.com/2020/01/15/create-fast-and-easy-docker-images-with-jib/?like_comment=18946&_wpnonce=083bdf5774
Re: Simplest way to download a web page and print the content to stdout with boost

From: "Francesco S. Carta" <entuland@gmail.com>
Newsgroups: comp.lang.c++
Date: Sun, 13 Jun 2010 15:03:42 -0700 (PDT)
Message-ID: <18a07df9-ce87-4a2b-9c5a-cd51eb856881@x27g2000yqb.googlegroups.com>

"Francesco S. Carta" <entul...@gmail.com> wrote:

gervaz <ger...@gmail.com> wrote:

On Jun 13, 1:42 pm, "Francesco S. Carta" <entul...@gmail.com> wrote:

gervaz <ger...@gmail.com> wrote:

Hi all, can you provide me the easiest way to download a web page (e.g. http://...) and print the output to stdout using the boost library? Thanks, Mattia

Yes, we can :-)

Sorry, but you should try to find the way by yourself first - that's not hard, split the problem and ask Google, find pointers and follow them, try to write some code and compile it. If you don't succeed you can post here your attempts and someone will eventually point out the mistakes.

--
FSC

Ok, nice advice :P

Here is what I've done (adapted from what I've found reading the doc and googling):

#include <iostream>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io_service;

    boost::asio::ip::tcp::resolver resolver(io_service);
    boost::asio::ip::tcp::resolver::query query("", "http");
    boost::asio::ip::tcp::resolver::iterator iter = resolver.resolve(query);
    boost::asio::ip::tcp::resolver::iterator end;
    boost::asio::ip::tcp::endpoint endpoint;
    while (iter != end) {
        endpoint = *iter++;
        std::cout << endpoint << std::endl;
    }

    boost::asio::ip::tcp::socket socket(io_service);
    socket.connect(endpoint);

    boost::asio::streambuf request;
    std::ostream request_stream(&request);
    request_stream << "GET / HTTP/1.0\r\n";
    request_stream << "Host: localhost \r\n";
    request_stream << "Accept: */*\r\n";
    request_stream << "Connection: close\r\n\r\n";

    boost::asio::write(socket, request);

    boost::asio::streambuf response;
    boost::asio::read_until(socket, response, "\r\n\r\n");
    std::cout << &response << std::endl;

    return 0;
}

But I'm not able to retrieve the entire web content. Other questions:

- the while loop seems like an iterator loop, but what does boost::asio::ip::tcp::resolver::iterator end stand for? Is it a zero value?

Whatever the value, in the framework of STL iterators the "end" one is simply something used to match the end of the container / stream / whatever, so that you know there isn't more data / objects to get. You shouldn't worry about its actual value - I ignore the details too, maybe there is something wrong with your program and I'll have a look, but I'm pressed and I wanted to drop in my 2 cents.

- to see the output I had to use &response, why?

It's not good to pass the address of a container to an ostream unless you're sure its actual representation matches that of a null-terminated c-style string. In this case I suppose you have to convert that buffer to something else, in order to print its data.

There is also the chance that you have to:

- call "read_until" to fill the buffer
- pick out the data from the buffer (eventually flushing / emptying it)

multiple times, until there is no more data to fill it.

Hope that helps you refining your shot.

I've played with your program a bit. Up to the line:

request_stream << "GET / HTTP/1.0\r\n";

everything should be fine.
In particular, the loop that checks for the end of the endpoint list is fine because, as it seems, those iterators get automatically set to mean "end" if you don't assign them to anything - it works differently from, say, a std::list, where you have to explicitly refer to the end() method of a list instantiation.

The first problem with your code is where you send the server the "Host" header. You should replace "localhost" with the domain name you want to read from - in this case:

request_stream << "Host:\r\n";

Then we have the (missing) loop to retrieve the data. The function "read_until" that you are calling will throw when the socket has no more data to read, and consider also that all overloads of that function return a size_t with the amount of bytes it has transferred to the buffer. It seems like you have to intercept the throw, in order to know when to stop calling it. Another option is to use the "read_until" overload that doesn't throw (it takes an error_code argument, instead) and maybe check if the returned size_t is not null - then you would break the loop.

So far we're just filling the buffer. For printing it out you have to build a std::istream out of it and get the data out through the istream. Try to read_until "\r\n", not "\r\n\r\n", then getline on the istream to a string.

If you want I'll post my (working?) code, but since I've learned a lot by digging my way, I think you can take advantage of doing the same.

Have good coding and feel free to ask further details if you want - heck, reading boost's template declarations is not very good time...

(don't exclude the fact that I could have said something wrong, it's something new for me too, I hope to be corrected by more experienced users out there, in such case)

--
FSC
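For reference, the read-and-print loop described in the reply could be sketched roughly like this, reusing the socket and response variables from the posted program (this is one possible shape of it, using the non-throwing read_until overload mentioned above):

boost::system::error_code ec;
while (boost::asio::read_until(socket, response, "\r\n", ec) > 0) {
    // keep filling the streambuf, one line at a time, until the
    // server closes the connection and read_until returns 0
}

std::istream response_stream(&response);
std::string line;
while (std::getline(response_stream, line)) {
    std::cout << line << '\n';
}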
https://preciseinfo.org/Convert/Articles_CPP/Socket_Code/C++-VC-ATL-STL-Socket-Code-100614010342.html
We run our automated tests either on our local machines or in CI systems. In some cases, we cannot see what the tests are doing. When it is an API test, unless it provides some output to the console, we are unaware of what is going on until the test finishes. If it is a UI test, unless we see what happens in the browser, we again have no clue about what is going on. This is why, in some cases, we need to output information to the console. This information will give us an overview of the test state or data used by the test.

One option to write test output to the console is to use the Apache Log4j library. The simplest use of this library in our test automation project is to log information we tell it to. This is very similar to performing a System.out, but it is a bit more configurable. And the output can be configured to follow certain patterns, so that we can easily search for particular types of output.

What sort of information would we want to output though? Well, if we are talking about long UI tests, we could send a message to the console when a certain step begins, or when it finished successfully. If we are using some randomly generated data, like dates or names, we could print these to the console. Such information would help with easily trying to manually reproduce a scenario that failed. The more information we output, the better we understand what our tests are doing. However, too much output will make it really hard to read, and we will tend to skip certain information, or simply not see it. Therefore it's important to only log information that is relevant to the test run and possible test failure.

Importing Log4j into your Maven project

If you have a Maven project, you should go to the Maven repository site and search for 'log4j-core'. Then, pick the latest release version. For now, here is the dependency you should add to your pom.xml file for the currently available version:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.14.1</version>
</dependency>

Don't forget to perform a 'clean install' operation on your project before proceeding to use this library in your tests.

Configuration file

Before using the library, you need to create a configuration file. The only requirement regarding where it should be placed is for it to be in the classpath. But because this is a configuration used in tests, I recommend creating it under src\test\resources. The name of the file should be 'log4j2.xml'. A basic configuration file can look like this:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss} %-5level %class.%method{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="all">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>

In the Appenders section, the pattern for the system output is configured. This means that every time something is logged, as per the above example, the following information will be printed to the console: a timestamp consisting of hour, minutes and seconds; the level (which I will discuss in the next paragraph); the class and method name from where the logging is done; the actual message, as a String; and then a newline.

An example of what the formatted output looks like:

22:05:05 INFO com.imalittletester.log4j.Log4jTest.firstTest - This is a message of type: info

In the 'Root' tag, we set the logging level.
Logging levels

As I have mentioned earlier, too much logged information could make it more difficult to read. But it would be great to log the most crucial information. Or, at least, if we log everything, to have a way to ignore the irrelevant information for a certain test run. This is what levels are useful for. By default, Log4j supports a few standard levels. They are, in decreasing order of severity: FATAL, ERROR, WARN, INFO, DEBUG, TRACE.

When we want to log information, the FATAL level should be used for the most crucial one. If we think of how developers would use this level: they would signal that a severe error occurred while running the application. Next, when they would use ERROR: they would still signal an error, but with less impact than a FATAL error. Of course, the least severe level would be TRACE. Because, as we will see in the 'Setting the logging level' paragraph, we can configure the levels, consider that we can restrict a test run to only show the most relevant output, or to show everything. Therefore do differentiate between these levels when you are logging the test output.

Logging in tests

First, we need to initialize the logger. We can do this either directly in a test class, or in a class that each test extends (a base class). Let's assume that all tests extend a base class. In this case, we will initialize a protected variable:

protected static final Logger LOGGER = LogManager.getLogger();

The required imports for our logging are:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

Now, each test class can log test run information by using the "LOGGER" variable. Logging is done by calling a method on the LOGGER variable corresponding to the level for which we are logging. Based on each level, the console output will be colored differently. Although in tests we might only need 2-3 levels, let's look at how to log for all standard ones. Here we are passing a simple String as parameter. That parameter represents the message we want to display at the console:

- trace: LOGGER.trace("This is a message of type: trace");
- debug: LOGGER.debug("This is a message of type: debug");
- info: LOGGER.info("This is a message of type: info");
- warn: LOGGER.warn("This is a message of type: warn");
- error: LOGGER.error("This is a message of type: error");
- fatal: LOGGER.fatal("This is a message of type: fatal");

The output of all the above commands, when running the tests from IntelliJ, looks as follows:

As you can see in the screenshot, INFO looks exactly like any regular System.out. The rest of the resulting logging details are colored based on 'severity'. The 'fatal' logging is highlighted in red. It suggests some information that is of utmost importance. Similarly, the 'error' level is highlighted in orange. It is the second most important type of logging information. You can think about logging with different levels as: the nice-to-have details should be on a lower severity level, like INFO. The most important information, that needs to easily pop into view when running the tests, should be on the FATAL or ERROR levels.

Note: in certain CI systems, the colors will not appear in the console output. When running tests from an IDE like IntelliJ, they do. In this case, in the CI, if you are looking for a certain output, you can search based on the level name.
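One more trick worth mentioning here: Log4j messages can be parameterized with {} placeholders, which is handy for the randomly generated test data discussed earlier, since it avoids string concatenation. A small sketch (the generated value below is made up for illustration):

String generatedUser = "user_" + System.currentTimeMillis(); // hypothetical generated test data
LOGGER.info("Starting the login test with generated user: {}", generatedUser);
LOGGER.fatal("Login failed for generated user: {}", generatedUser);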
Setting the logging level

In the case of these examples, all the above commands ran successfully (all the logging was output). That is because in the log4j configuration file the level, in the Root tag, is set to 'all'. This setting helps restrict what type of information you want to see when running a test. Let's say that for a certain run, you only want to see the 'must have' information, not the 'nice to have' one. For such a situation, you need to set the level to a different, more restrictive value. For example, if you want to see only 'error' and 'fatal' information, the level in the 'Root' tag should be set to 'error', as in <Root level="error">.

Further reading

The log4j library has much more potential. Read all about the other library features you can use in the official documentation.
https://blog.testproject.io/2021/05/05/logging-test-automation-information-with-log4j/
Indexed polygonal face shape node. More...

#include <Inventor/nodes/SoIndexedFaceSet.h>

Indexed polygonal face shape node.

Skeleton to create a polygon with holes:

// 1) Choose a winding rule with the windingType field from SoShapeHints.
SoShapeHints *myShapeHints = new SoShapeHints;
myShapeHints->windingType = SoShapeHints::ODD_TYPE;

// Create list of contours.
static int32_t indices[21] = {
    0, 3, 1, SO_END_CONTOUR_INDEX,
    5, 6, 4, SO_END_POLYGON_INDEX,    // To end the first polygon.
    0, 7, 3, SO_END_CONTOUR_INDEX,
    10, 9, 8, SO_END_CONTOUR_INDEX,
    9, 7, 0, 8, SO_END_POLYGON_INDEX  // To end the second polygon.
};

// Note: The last polygon must end with either SO_END_POLYGON_INDEX or SO_END_CONTOUR_INDEX or nothing
static int32_t indices[21] = {
    0, 3, 1, SO_END_CONTOUR_INDEX,
    5, 6, 4, SO_END_POLYGON_INDEX,
    0, 7, 3, SO_END_CONTOUR_INDEX,
    10, 9, 8, SO_END_CONTOUR_INDEX,
    9, 7, 0, 8
};

See also: SoCoordinate3, SoDrawStyle, SoFaceDetail, SoFaceSet, SoFullSceneAntialiasing, SoIndexedTriangleSet, SoShapeHints, SoVertexProperty

Creates an indexed face set node with default settings.

Returns the type identifier for this class.
Reimplemented from SoIndexedShape.
Reimplemented in SoGeoElevationGrid, and SoVolumeIndexedFaceSet.

Returns the type identifier for this specific instance.
Reimplemented from SoIndexedShape.
Reimplemented in SoGeoElevationGrid, and SoVolumeIndexedFaceSet.
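As a usage illustration, the skeleton above only declares the index list; hooking it into a scene graph could look roughly like the sketch below, where 'vertices' is a placeholder for an SbVec3f array holding the 11 coordinates referenced by the indices:

SoSeparator *root = new SoSeparator;
root->addChild(myShapeHints);             // winding rule chosen above

SoCoordinate3 *coords = new SoCoordinate3;
coords->point.setValues(0, 11, vertices); // your own coordinate data
root->addChild(coords);

SoIndexedFaceSet *faceSet = new SoIndexedFaceSet;
faceSet->coordIndex.setValues(0, 21, indices);
root->addChild(faceSet);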
https://developer.openinventor.com/refmans/latest/RefManCpp/class_so_indexed_face_set.html
Created on 2015-08-14 02:55 by Andre Merzky, last changed 2015-09-12 23:36 by Andre Merzky.

- create a class which is a subclass of multiprocessing.Process ('A')
- in its __init__ create a new thread ('B') and share a queue with it
- in A's run() method, run 'C = subprocess.Popen(args="/bin/false")'
- push 'C' through the queue to 'B'
- call 'C.poll()' --> returns 0

Apart from returning 0, the poll will also return immediately, even if the task is long running. The task does not die -- 'ps' shows it is well alive.

I assume that the underlying reason is that 'C' is moved sideways in the process tree, and the wait is happening in a thread which is not the parent of C. I assume (or rather guess, really) that the system level waitpid call raises an 'ECHILD' (see wait(2)), but maybe that is misinterpreted as 'process gone'?

I append a test script which shows different combinations of process spawner and watcher classes. All of them should report an exit code of '1' (as all run /bin/false), or should raise an error. None should report an exit code of 0 -- but some do.

PS.: I implore you not to argue if the above setup makes sense -- it probably does not. However, it took significant work to condense a real problem into that small excerpt, and it is not a full representation of our application stack. I am not interested in discussing alternative approaches: we have those, and I can live with the error not being fixed.

#!/usr/bin/env python

from subprocess import Popen
from threading import Thread as T
from multiprocessing import Process as P
import multiprocessing as mp

class A(P):
    def __init__(self):
        P.__init__(self)
        self.q = mp.Queue()

        def b(q):
            C = q.get()
            exit_code = C.poll()
            print "exit code: %s" % exit_code

        B = T(target=b, args=[self.q])
        B.start()

    def run(self):
        C = Popen(args='/bin/false')
        self.q.put(C)

a = A()
a.start()
a.join()
:/ FWIW there is also a comment at the end of the related issue1731717 (for Popen.wait() rather than .poll()) with a suggestion to ponder (though not directly related to this issue, if it is still relevant). Yes, I have a workaround (and even a clean solution) in my code. My interest in this ticket is more academic than anything else :) Thanks for the pointer to issue1731717. While I am not sure which 'comment at the end' you exactly refer to, the whole discussion provides some more insight on why SIGCHLD is handled the way it is, so that was interesting. I agree that changing the behavior in a way which is unexpected for existing applications is something one wants to avoid, generally. I can't judge if it is worth to break existing code to get more correctness in a corner case -- depends on how much (and what kind of) code relies on it, which I have no idea about. One option to minimize change and improve correctness might be to keep track of the parent process. So one would keep self.parent=os.getpid() along with self.pid. In the implementation of _internal_poll one can then check if self.parent==os.getpid() still holds, and raise an ECHILD or EINVAL otherwise. That would catch the pickle/unpickle across processes case (I don't know Python well enough to see if there are easier ways to check if a class instance is passed across process boundaries). The above would still not be fully POSIX (it ignores process groups which would allow to wait on non-direct descendants), but going down that route would probably almost result in a reimplementation of what libc does... This is patch is meant to be illustrative rather than functional (but it works in the limited set of cases I tested).
https://bugs.python.org/issue24862
Vue3 Sidebar is a panel component displayed as an overlay at the edges of the screen. It supports Vue 3 with PrimeVue 3 and Vue 2 with PrimeVue 2.

Setup

Refer to the PrimeVue setup documentation for download and installation steps for your environment, such as Vue CLI, Vite or browser.

Import

import Sidebar from 'primevue/sidebar';

Getting Started

Sidebar is used as a container, and visibility is controlled with the visible property, which requires a v-model two-way binding.

<Sidebar v-model:visible="visibleLeft">
    Content
</Sidebar>

<Button icon="pi pi-arrow-right" @click="visibleLeft = true" />

Position

Sidebar can either be located on the left (default), right, top or bottom of the screen, depending on the position property.

<Sidebar v-model:visible="visibleRight" position="right">
    Content
</Sidebar>

Size

Sidebar size can be changed using a fixed value or using one of the three predefined ones.

<Sidebar v-model:visible="visibleLeft" class="p-sidebar-sm"></Sidebar>
<Sidebar v-model:visible="visibleLeft" class="p-sidebar-md"></Sidebar>
<Sidebar v-model:visible="visibleLeft" class="p-sidebar-lg"></Sidebar>

Full Screen

Full screen mode allows the sidebar to cover the whole screen.

<Sidebar v-model:visible="visibleFullScreen" position="full">
    Content
</Sidebar>

Theming

Sidebar supports various themes featuring Material, Bootstrap, Fluent as well as your own custom themes via the Designer tool.

Resources

Visit the PrimeVue Sidebar showcase for demos and documentation.
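Putting the snippets above together, a minimal single-file component could look like this (the binding names are illustrative, and the components are registered locally here):

<template>
  <Button icon="pi pi-arrow-right" @click="visibleLeft = true" />
  <Sidebar v-model:visible="visibleLeft">
    Content
  </Sidebar>
</template>

<script>
import Sidebar from 'primevue/sidebar';
import Button from 'primevue/button';

export default {
  components: { Sidebar, Button },
  data() {
    return {
      visibleLeft: false
    };
  }
};
</script>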
https://practicaldev-herokuapp-com.global.ssl.fastly.net/primetek/vue3-sidebar-312k
Just before Dreamforce, Dave Carroll and I hosted the Winter '13 developer preview webinar talking about what's new to the platform. One of the major features that became generally available was Visualforce Charting. Visualforce Charting is an easy way to create customized business charts, based on data sets you create directly from SOQL queries or by building the data set in JavaScript. Sandeep Bhanot posted an article a while back going over how to build layered line and bar charts with a controller, and in this article I am going to go over new chart types and techniques.

Visualforce charts are rendered client-side using JavaScript. This allows charts to be animated and, on top of that, chart data can load and reload asynchronously, which can make the page feel more responsive. In this article, I want to highlight a few of the new chart types that were recently released, and demonstrate some of the advanced rendering capabilities.

You can build this data set similarly to how you would build a data set for a pie chart. The main difference is that there is only one list element to build the complete chart. In the example below, I am summarizing the total amount of opportunities closed this month related to an account.

public class GaugeChartController {

    public String acctId {get;set;}

    public GaugeChartController(ApexPages.StandardController controller){
        acctId = controller.getRecord().Id;
    }

    public List<gaugeData> getData() {
        Integer TotalOpptys = 0;
        Integer TotalAmount = 0;
        Integer thisMonth = date.Today().month();
        AggregateResult ClosedWonOpptys = [select SUM(Amount) totalRevenue,
            CALENDAR_MONTH(CloseDate) theMonth, COUNT(Name) numOpps from Opportunity
            where AccountId =: acctId and StageName = 'Closed Won'
            and CALENDAR_MONTH(CloseDate) =: thisMonth
            GROUP BY CALENDAR_MONTH(CloseDate) LIMIT 1];
        List<gaugeData> data = new List<gaugeData>();
        data.add(new gaugeData(Integer.valueOf(ClosedWonOpptys.get('numOpps')) + ' Opptys',
            Integer.valueOf(ClosedWonOpptys.get('totalRevenue'))));
        return data;
    }

    public class gaugeData {
        public String name { get; set; }
        public Integer size { get; set; }
        public gaugeData(String name, Integer data) {
            this.name = name;
            this.size = data;
        }
    }
}

The structure for building this chart is almost identical to what we've seen already with the previously existing chart types. To populate the chart with data, you need to build a list with an inner class (aka wrapper class). One thing to note: your wrapper class must have a 'name' field if you want to display a tooltip on hover. Once your data set is constructed, you can output it on your Visualforce page with a few simple components:

<apex:page <apex:chart <apex:axis <apex:gaugeSeries </apex:chart> </apex:page>

The chart to the left is what the output looks like initially. If you try to use CSS to manipulate the width and/or height of the frame, you will get to watch the frame dynamically continue to cut off your text. Thankfully the frame problem can be alleviated using a little bit of JavaScript.

First give your chart a name. This makes the component recognizable as a JavaScript object for additional configurations or dynamic operations. Note this name must be unique across all chart components, and if the encompassing top-level component (<apex:page> or <apex:component>) is namespaced, the chart name will be prefixed with it (i.e. MyNamespace.MyChart). In my page above I named the chart "MyChart." I was able to manipulate all of the axes by using a simple on() method call.
The code below is the snippet that I put into my page, and the picture below that displays the new results.

<script>
    MyChart.on('beforeconfig', function(config) {
        config.axes[0].margin = -10;
    });
</script>

The radar chart is a unique chart to build. In order to create this data set, you will need to create a list of maps rather than using a wrapper class. For my example, I am plotting customer satisfaction ratings related to an account. Each of these ratings is stored as a number field to plot on a circle. In Winter '13 you can now query field sets, so I have put all of the rating fields from my account into a field set (named RadarSet) to build out my chart. One thing to note: you don't have to use a field set to build this chart. I used a field set because I thought it would be more elegant to dynamically query for and generate chart data. You could build your own hardcoded SOQL query if you wanted, but by using field sets you can easily change the fields in the query without having to change the code, or you could also take this code and make another radar chart quickly off of a different field set.

public class RadarDemo {

    public List<Map<Object,Object>> data = new List<Map<Object,Object>>();
    public String acctId {get;set;}

    public RadarDemo(ApexPages.StandardController controller){
        acctId = controller.getRecord().Id;
    }

    public List<Schema.FieldSetMember> getFields() {
        return SObjectType.Account.FieldSets.RadarSet.getFields();
    }

    public List<Map<Object,Object>> getData() {
        String query = 'SELECT ';
        List<String> fieldNames = new List<String>();
        for(Schema.FieldSetMember f : getFields()){
            query += f.getFieldPath() + ', ';
            fieldNames.add(f.getFieldPath());
        }
        query += 'Id, Name FROM Account where Id=\'' + acctId + '\' LIMIT 1';
        SObject myFieldResults = Database.Query(query);
        Schema.DescribeSObjectResult R = myFieldResults.getSObjectType().getDescribe();
        Map<String, Schema.SObjectField> fieldMap = R.fields.getMap();

        // creates a map of labels and api names
        Map<String,String> labelNameMap = new Map<String,String>();
        for(String key : fieldMap.keySet()){
            labelNameMap.put(fieldMap.get(key).getDescribe().getName(),
                             fieldMap.get(key).getDescribe().getLabel());
        }

        // creates a map of labels and values
        for(String f : fieldNames){
            String fieldLabel = labelNameMap.get(f);
            String fieldValue = String.valueOf(myFieldResults.get(f));
            Map<Object, Object> m = new Map<Object,Object>();
            m.put('field', fieldLabel);
            m.put('value', fieldValue);
            data.add(m);
        }
        return data;
    }
}

To explain the code above, I have broken down the order of operations in my class as follows:

1. getFields returns the members of the RadarSet field set defined on Account.
2. getData builds a dynamic SOQL query from those field paths (plus Id and Name), filtered on the current account, and runs it.
3. A describe call on the query result builds a map from field API names to field labels.
4. For each field in the field set, a map holding the field's label and value is added to the data list that feeds the chart.
The actual JavaScript function is defined in or linked from your Visualforce page and has the opportunity to manipulate the results before passing it to your chart, or to perform other user interface or page updates. In my example I am querying for all opportunities related to a campaign and plotting them on a graph with the x-axis displaying the expected revenue value and the y-axis displaying the actual amount. By default it will display all opportunities, but I am using an actionFunction on a selectList to rerender the chart to display records of a specific lead source. First, let’s take a look at my JavaScript function on my Visualforce page constructing the chart data. function getRemoteData(callback) { ScatterChartController.getRemoteScatterData(function(result, event) { var sourceType = $('[id*="leadSource"]').val(); var newResultList = new Array(); var index = 0; if(event.status && result && result.constructor === Array) { for(i = 0; i < result.length; i++){ if(result[i].type == sourceType ){ newResultList[index] = result[i]; index++; } else if (sourceType == 'All') { newResultList[index] = result[i]; index++; } } callback(newResultList); } }); } In my function I first call a remote action, a method inside my Apex class, called getRemoteScatterData. After I check that the result returned from the remote action is valid, I loop through the results to either save everything if ‘All’ is selected or only the opportunities with the selected lead source. I used a jQuery selector to grab the selected value because Visualforce appends extra characters to the id of the selectList. By default my selectList shows everything, but you could easily add in more JavaScript functionality like adding a show/hide to only show the chart after a value is selected. There is a lot of flexibility with how you want the chart to render in here. My remote action (displayed below) is very simple in comparison. It runs the query on all opportunities related to the campaign and then saves the result to a list using my wrapper class. The values being sent to my wrapper class are stored to variables ‘name’, ‘type’, ‘expected’, and ‘amount’ respectively. This is why in my JavaScript function I can reference result[i].type. @RemoteAction public static List getRemoteScatterData() { List data = new List(); List opps = [select Name, Id, Amount, ExpectedRevenue, LeadSource from Opportunity where CampaignId =: campId]; for(Opportunity opp : opps){ data.add(new scatterData(opp.Name, opp.LeadSource,Integer.valueOf(opp.ExpectedRevenue), Integer.valueOf(opp.Amount))); } return data; } Back on my Visualforce page, I have two main sections other than the JavaScript that I want to breakdown. First, we have the chart. This pulls in the ‘newResultList’ returned from the callback in my getRemoteData function. The component does some behind-the-scenes magic on the back end, so for proper syntax you only need to reference the name of the function you are calling. Map the associated attributes with the appropriate variables in your wrapper class, and add an outputPanel around the chart for advanced rendering. <apex:outputPanel <apex:chart <apex:scatterSeries <apex:axis <apex:chartLabel /> </apex:axis> <apex:axis <apex:chartLabel /> </apex:axis> </apex:chart> </apex:outputPanel> In order to rerender this chart, I created an actionFunction that gets called onChange of the list value. 
I had to do this because there is no reRender attribute on the selectList tag, but the actionFunction action is a simple method in my class (public PageReference NoOp()) that does nothing except return null. When the NoOp method finishes, the chart will rerender and call the JavaScript function again using the new value in the list to sort the chart points. <apex:actionFunction <apex:selectList <apex:selectOptions </apex:selectList> Visualforce charting enables you to quickly generate rich, animated charts, without having to use a 3rd party system for the meat of it. There are definitely some kinks in the process right now being that it just went GA in Winter ’13, but I’m sure we’ll be seeing updates and enhancements with upcoming releases in the future. I have done a few examples here, and there are more examples elsewhere on developer.force.com, but there’s no better way to learn than to try it out yourself. In addition to the examples I have put here, I have also uploaded a full sample pack on GitHub including a complete Apex controller and Visualforce page for each chart type. Take a look at those examples, and feel free to reach out to me via twitter if you have any questions.
https://developer.salesforce.com/blogs/developer-relations/2012/10/animated-visualforce-charts.html
Publishing npm packages is only an npm publish away. Assuming the package name is still available and everything goes fine, you should have something out there! After this, you can install your package through npm install or npm i.

Most of the community follows a specific versioning convention which you should understand. It comes with its downsides, but given that the majority use and understand it, it's worth covering.

Most popular packages out there follow SemVer. Roughly, SemVer states that you should not break backward compatibility, given certain rules are met:

- bump the MAJOR version when you make backwards-incompatible changes,
- bump the MINOR version when you add functionality in a backwards-compatible manner,
- bump the PATCH version when you make backwards-compatible bug fixes.

The rules are different for 0.x versions. There the rule is 0.<MAJOR>.<MINOR>. For packages considered stable and suitable for public usage (1.0.0 and above), the rule is <MAJOR>.<MINOR>.<PATCH>. For example, if the current version of a package is 0.1.4 and a breaking change is performed, it should bump to 0.2.0.

Given SemVer can be tricky to manage, ComVer exists as a backwards compatible alternative. ComVer can be described as a binary decision <not compatible>.<compatible>.

You can understand SemVer much better by studying the online tool and how it behaves. Not all version number systems are created equal. Sometimes people prefer to use their own and go against the mainstream. Sentimental versioning by Dominic Tarr discusses this phenomenon.

To increase the version of your packages, you need to invoke one of these commands:

- npm version <x.y.z> - Define the version yourself.
- npm version <major|minor|patch> - Let npm bump the version for you using SemVer.
- npm version <premajor|preminor|prepatch|prerelease> - Same as previous, except this time it generates a -<prerelease number> suffix. Example: v2.1.2-2.

Invoking any of these updates package.json and creates a version commit to git automatically. If you execute npm publish after doing this, you should have a new version out there.

Sometimes, you want to publish something preliminary to test. Tag your release as below:

The initial alpha release allows the users to try out the upcoming functionality and provide feedback. The beta releases can be considered more stable. The release candidates (RC) are close to an actual release and don't introduce any new functionality. They are all about refining the release till it's suitable for general consumption.

The workflow has two steps:

1. npm version 0.5.0-alpha1 - Update package.json as discussed earlier.
2. npm publish --tag alpha - Publish the package under the alpha tag.

To consume the test version, your users have to use npm install <your package name>@alpha.

npm link allows you to link a package as a globally available symbolic link. Node resolves to the linked version unless a local node_modules exists. Use npm unlink or npm unlink <package> to remove the link.

It's possible that your package reaches the end of its life. Another package could replace it, or it can become obsolete. For this purpose, npm provides the npm deprecate command. You can state npm deprecate foo@"< 0.4.0" "Use bar package instead". You can deprecate a range or a whole package by skipping the range. Given mistakes happen, you can undeprecate a package by providing an empty message.

Deprecation can be handy if you have to rename a package. You can publish the package under a new name and let the users know of the new name in your deprecation message.
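As a concrete sketch of that renaming flow (the package names foo and bar are made up):

# After publishing the package under its new name "bar":
npm deprecate foo "foo has been renamed; install bar instead"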
Given this can be potentially dangerous and break the code for a lot of people, it has been restricted to versions that are less than 24 hours old. Most likely you don’t need the feature at all, but it’s nice to know it exists. As packages evolve, you likely want to start developing with others. You could become the new maintainer of a project, or pass the torch to someone else. These things happen as packages evolve. npm provides certain commands for these purposes. It’s all behind npm owner namespace. More specifically, there are npm owner ls <package name>, npm owner add <user> <package name> and npm owner rm <user> <package name>. That’s about it. When publishing npm packages, you should take care to follow SemVer carefully. Consider ComVer as it’s a simpler backwards compatible alternative. Use tooling to your advantage to avoid regressions and to keep your user base happy. You’ll learn how to build npm packages in the next chapter. This book is available through Leanpub. By purchasing the book you support the development of further content.
https://survivejs.com/maintenance/packaging/publishing/index.html
IBM Cognos 8 Version 8.4.1
Administration and Security Guide

Product Information
This document applies to IBM Cognos 8 Version 8.4.1 and may also apply to subsequent releases. IBM, the IBM logo, ibm.com, and Cognos are trademarks or registered trademarks of International Business Machines Corp., in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web. Microsoft product screen shot(s) reprinted with permission from Microsoft Corporation.

Licensed Materials – Property of IBM
© Copyright IBM Corp. 2005, 2009.

Table of Contents

Introduction

Chapter 1: What's New?
  New Features in Version 8.4
    Additional Language Support
    Support for IBM Metadata Workbench as a Lineage Solution
    Access to IBM WebSphere Business Glossary
    Managing Comments Using Cognos Viewer
    Adding Application Context to Dynamic SQL
    Support for New Data Sources
    Support for New Portal Versions
    Hiding Entries
    Updating Published PowerCubes
    Object Capabilities
    Schedule Credentials
    Save History Details for Job Steps
    Viewing Lineage Information
    Enhanced Drill-through Capabilities
    Metric Studio Content in Portal Pages
  Changed Features in Version 8.4
    Composite Information Server is Replaced By IBM Cognos 8 Virtual View Manager
    IBM Cognos 8 Portal Services
  Deprecated Features in Version 8.4
    IBM Cognos 8 PowerCube Connection Utility and cubeswap
  New Features in Version 8.3
    Improved User Interface for Administrators
    Access to More Documentation
    More Granular Administrative Capabilities
    Access to System Statistics
    Snapshot of System Health
    Managing Queues
    Reducing Deployment Details
    Setting Priority on Schedules
    Better Control of Interactive Features in Reports
    New Sample Audit Reports
    Publishing and Managing Packages in Non-root Folders
    Enabling Report Studio Authoring Modes
    Server Administration
    Transformer Integrated into IBM Cognos 8
    My Activities and Schedules
    My Watch Items
    Report Alerts
    Watch Rules
    Drilling Through on Multiple Values
    Go Directly to Target Report When Only One Target Report Exists
    Support for Microsoft Excel 2007
    Saving Report Outputs to a File System
    Resubmitting Failed Jobs and Reports
    Resubmitting Failed Agent Tasks
    Default Actions for Agent Items
    Tabbed Portal Pages
    Global Filters and Enhanced Portal Interactivity
    Metric Studio Content in Portal Pages
    Support for Microsoft SharePoint Portal 2003 and 2007
  Changed Features in Version 8.3
    Updated IBM Cognos Connection Look
    More Information in the Go To Page
    Cognos Watchlist Portlet
    Replaced Capability

Part 1: IBM Cognos 8 Administration

Chapter 2: IBM Cognos 8 Administration
  IBM Cognos Administration
  Automating Tasks
  Setting up a Multilingual Reporting Environment
  Configuring Your Database For Multilingual Reporting
  Installing Fonts
  IBM Cognos 8 Default Font
  Report Studio Fonts
  Set Up Printers
  Configuring Web Browsers
  Allow User Access to Series 7 Reports from IBM Cognos Connection
  Restricting Access to IBM Cognos 8

Chapter 3: Building IBM Cognos 8 Applications

Chapter 4: Samples
  The Great Outdoors Company Samples
  The Great Outdoors Group of Companies
  Employees
  Sales and Marketing
  Great Outdoors Database, Models, and Packages
  Setting Up the Samples
  Restore Backup Files for the Samples Databases
  Create Data Source Connections to the Samples Databases
  Set Up Microsoft Analysis Services Cube Samples
  Set Up the Essbase Cube Sample
  Create Data Source Connections to OLAP Data Sources
  Set Up the Metric Studio Sample
  Import the Samples
  Sample Database Models
  Example - Running the Sample ELM Returns Agent Against Changed Data
  Remove the Samples Databases from IBM Cognos 8

Chapter 5: Setting up Logging
  Log Messages
  Logging Levels
  Set Logging Levels
  Audit Reports
  Setting Up Audit Reporting
  Sample Audit Model and Audit Reports
  View Full Details for Secure Error Messages
  Disable the Creation of Core Dump Files

Part 2: System Administration

Chapter 6: System Performance Metrics
  How Metric Data is Gathered
  System Metrics
  Panes on the Status System Page
  Assess System Performance
  View Attributes for Metric Scores
  Set Metric Threshold Values
  Reset Metrics
  Reset Metrics for the System

Chapter 7: Server Administration
  Dispatchers and Services
  Stop and Start Dispatchers and Services
  Activate a Content Manager Service
  Remove Dispatchers from the Environment
  Group Dispatchers in Configuration Folders
  Specify Advanced Dispatcher Routing
  Specify Gateway Mappings for Series 7 PowerPlay Data
  Rename Dispatchers
  Test Dispatchers
  Administering Failover for Multiple Dispatchers
  Securing Dispatchers
  Content Manager Computers
  Managing Database Connection Pool Settings for Content Manager
  Sorting Entries for Non-English Locales
  Managing Content Manager Synchronization
  Control Browsing of External Namespaces
  Set Advanced Content Manager Parameters
  Override the (Default) Locale Processing in the Prompt Cache
  Maintain the Content Store
  Migrating PowerPlay Reports Published to IBM Cognos 8
  Tune Server Performance
  Set Server Group Names for Advanced Dispatcher Routing
  Balance Requests Among Dispatchers
  Use Cluster Compatible Mode for Dispatchers
  Set Usage Peak Periods
  Set the Maximum Number of Processes and Connections
  Specify Queue Time Limits
Contents PDF File Settings 134 Set Maximum Execution Time 137 Specify How Long to Keep Watch List Report Output 138 Limit Hotspots That are Generated in an Analysis Studio or Report Studio Chart 139 Set Compression for Email Attachments 140 Set the Report Size Limit for the Report Data Service 141 Set Parameters for Concurrent Query Execution 141 Set Query Prioritization 143 Disable Session Caching 145 Reduce Decimal Precision 147 Add Application Context to Dynamic SQL 147 Saved Report Output 149 Set File Location to Save a Copy of Report Output Outside of IBMCognos 8 149 Set File Location to Save Copy of Report Output in IBMCognos 8 150 Configure the Lineage Solution 152 Configure the IBMWebSphere Business Glossary URI 153 Enable Job,SMTP,and Task Queue Metrics 153 Part 3:Data Management Chapter 8:Data Sources and Connections 155 DB2 156 IBMCognos Cubes 157 IBMCognos Finance 157 IBMCognos Now!Cube 158 IBMCognos Planning - Contributor 158 IBMCognos Planning - Series 7 159 IBMCognos PowerCubes 159 IBMDB2 OLAP and Hyperion Essbase Data Source 164 Configure Scenario Dimensions 165 Specify Balanced Hierarchies 165 Specify Measure Formats 166 Specify the Attributes of a Dimension 167 IBMInfosphere Warehouse Cubing Services 168 Informix Data Sources 169 Microsoft Analysis Services Data Sources 169 Microsoft SQL Server Data Sources 173 ODBC Data Sources 176 Oracle Data Sources 178 SAP Business Information Warehouse (SAP BW) Data Sources 179 TM1 Data Sources 181 XML Data Sources 182 Create a Data Source 183 Add or Modify a Data Source Connection 186 Manage Data Source Connections 187 Create or Modify a Data Source Signon 189 Specifying Isolation Levels 190 Using Database Commands for Oracle,DB2,and Microsoft SQL Server 193 Example - Open Connection Command Block 194 Example - Close Connection Command Block 194 6 Table of Contents Add Command Blocks While Creating a Data Source 194 Add or Modify Command Blocks for a Connection 194 Command Block Examples 195 Securing Data Sources 198 Chapter 9:Back Up Data 199 Chapter 10:Data Movement Entries 201 Run a Data Movement Entry 201 Change Default Data Movement Entry Properties 202 Part 4:Security Administration Chapter 11:Security Model 203 Authentication Providers 203 Deleting or Restoring Unconfigured Namespaces 204 Authorization 205 Cognos Namespace 205 IBMCognos Application Firewall 206 Data Validation and Protection 206 Logging and Monitoring 207 Chapter 12:Users,Groups,and Roles 209 Users 209 User Locales 210 Groups and Roles 210 Create an Cognos Group or Role 212 Add or Remove Members of an Cognos Group or Role 213 Chapter 13:Access Permissions 215 Set Access Permissions for an Entry 218 Trusted Credentials 220 Create Trusted Credentials 220 Chapter 14:Secured Functions and Features 223 Set Access to a Secured Function or Feature 227 Set Access to the Report Studio User Interface Profiles 229 Chapter 15:Object Capabilities 231 Set Up Object Capabilities for a Package 233 Chapter 16:Initial Security 235 Built-in Entries 235 Predefined Entries 236 Specify Security Settings After Installation 238 Securing the Content Store 239 Part 5:Content Administration in IBMCognos Connection Chapter 17:IBMCognos Connection 241 Log On 241 Log Off 241 Create a Shortcut 242 Administration and Security Guide 7 Table of Contents Create a URL 243 Bookmark an Entry 244 Models and Packages 245 Entry Properties 245 General Properties 245 Permissions 248 Report,Query,Analysis,and PowerPlay Report Properties 249 Job Properties 250 Agent Properties 251 Page Properties 251 
Rule Properties 252 Organizing Entries 253 Copy an Entry 253 Move an Entry 254 Rename an Entry 255 Disable an Entry 255 Hide an Entry 256 Select a Link for an Entry 258 Delete an Entry 259 Specify the Order of Entries 259 Create a Folder 260 Specify an Alternative Icon for an Entry 261 Personalize the Portal 262 My Watch Items 265 View Watch Items 266 Remove Yourself from an Alert List 266 Edit a Watch Rule 266 Chapter 18:Pages and Dashboards 269 Create a Page 271 Edit a Page 272 Share a Page 272 Modify a Portlet 273 Enable Communication Between Cognos Portlets 273 Portal Tabs 275 Add a Tab 275 Delete a Tab 276 Reorder the Tabs 277 Change Your Home Page 277 Create a Dashboard with Multiple Tabs 277 Adding Interactivity to Pages and Dashboards 280 Defining Global Filters 280 Enable Sharing of Drill-up and Drill-down Actions 285 Enable Sharing of Drill-through Actions 285 Chapter 19:Activities Management 287 Manage Current Activities 287 Manage Past Activities 289 Manage Upcoming Activities for a Specific Day 291 8 Table of Contents Manage Scheduled Activities 293 Manage Entry Run Priority 295 View the Run History for Entries 296 Specify How Long to Keep Run Histories 298 Rerun a Failed Entry Task 299 Chapter 20:Schedule Management 301 Schedule an Entry 302 Example - Schedule an Entry on the Last Day of the Month 303 Use Jobs to Schedule Multiple Entries 303 Trigger-based Entry Scheduling 306 Setting Up Trigger-based Scheduling 306 Schedule an Entry Based on an Occurrence 307 Chapter 21:Deployment 309 Deployment Specifications 309 Deployment Archives 310 Deployment Planning 310 Security and Deployment 310 Deploying the Entire Content Store 311 Deploying Selected Public Folders and Directory Content 313 Deployment Rules When Importing 317 Rules For Deploying the Entire Content Store 318 Rules For Partial Deployment 322 Deploying IBMCognos 8 Entries 326 Create an Export Deployment Specification 327 Move the Deployment Archive 330 Import to a Target Environment 330 Include Configuration Objects in Import of Entire Content Store 333 Testing Deployed Applications 334 Upgrade Report Specifications 334 Chapter 22:Administering Packages 337 Data Trees 337 Configure or Reconfigure a Package 338 Set Permissions for Package Configuration 338 Remove a Package Configuration 339 Chapter 23:Managing User Profiles 341 Edit the Default User Profile 341 Delete a User Profile 342 Copy a User Profile 342 View or Change a User Profile 343 Chapter 24:Administering Microsoft Office Documents 345 Deploying IBMCognos 8 Go!Office Client 345 Set Macro Security Level for Microsoft Office XP 346 Install the CA Certificate for the HTTPS Interface to Series 7 PowerPlay 346 Download a Microsoft Office Document 346 Administration and Security Guide 9 Table of Contents Part 6:Report,Agent,and Metric Administration Chapter 25:Reports and Cubes 349 View,Run,or Open a Report 350 Set Default Report Options 350 Set Report Options for the Current Run 352 Set Advanced Report Options for the Current Run 353 Create a Report 355 Creating a Query Studio Report Without Using Actual Data 356 Create a Report View 356 View Lineage Information for a Data Item 357 Access the IBMWebSphere Business Glossary 358 Edit a Report 358 Report Formats 359 HTML Formats 360 XML Format 360 PDF Format 360 Excel Formats 361 CSV Format 362 Report Languages 362 Add Multilingual Properties 363 Specify the Default Language for a Report 364 Specify the Default Prompt Values for a Report 364 Save Report Output 365 View Report Output Versions 366 Specify How Long 
to Keep Report Output 367 Enable an Alert List for a Report 367 Add Yourself to or Remove Yourself from the Alert List for a Report 368 Watch Rules in Saved Reports 368 Enable Watch Rules for a Report 369 Create a Watch Rule for a Report 369 Modify or Delete a Watch Rule in Cognos Viewer 370 Enable Comments in Saved Output Versions 371 Add Comments to a Report Version in Cognos Viewer 372 View,Modify or Delete Comments in Cognos Viewer 372 Disable Selection-based Interactivity 373 Exclude Blank Pages in PDF Reports 374 Distributing Reports 374 Saving a Report 375 Sending a Report by Email 375 Sending a Report to your Mobile Device 375 Printing a Report 375 Distributing a Report by Bursting 376 Create Distribution Lists and Contacts 376 Drilling to View Related Data 377 Drill Up or Drill Down 378 Drill Through to Another Target 378 Drill Through to Another Target Passing Multiple Values 379 10 Table of Contents Data Sources With Named Sets May Produce Unpredictable Results 380 Series 7 Reports in IBMCognos Connection 380 Series 7 PowerPlay Reports and Cubes 381 Single Signon 382 Run or Open a Series 7 PowerPlay Report 382 Change the Defaults for a Series 7 PowerPlay Report 383 Open a Series 7 Cube 384 Multilingual Properties for Series 7 Reports and Cubes 384 Chapter 26:Agents 385 Run an Agent 385 Change Default Agent Properties 386 Create an Agent View 387 Open or Create an Agent from IBMCognos Connection 387 Enable an Alert List for an Agent 388 Add Yourself to or Remove Yourself from an Alert List for an Agent 388 Remove All Users from the Alert List for an Agent 389 Receive News Item Headlines 389 View the Most Recent Event List 389 Chapter 27:Metric Studio Metrics 393 Create a Metric Package 393 Change the Default Action for Packages 394 Run a Metric Task 394 Delete a Metric Task 395 Modify a Metric Task 395 Metric Import Tasks 396 Create New Metric Import Task 396 Edit Metric Import Task Properties 397 Metric Maintenance Tasks 397 New Metric Maintenance 398 Edit Metric Maintenance Properties 398 Metric Export Tasks 399 Change Metric Export Properties 401 Chapter 28:Drill-through Access 403 Understanding Drill-through Concepts 404 Drill-through Paths 404 Selection Contexts 404 Drilling Through to Different Report Formats 405 Drilling Through Between Packages 405 Bookmark References 406 Members and Values 406 Member Unique Names 407 Conformed Dimensions 408 Business Keys 409 Scope 409 Mapped Parameters 409 Drilling Through on Dates Between PowerCubes and Relational Packages 410 Set Up Drill-through Access in Packages 410 Administration and Security Guide 11 Table of Contents Set Up Parameters for a Drill-Through Report 413 Set Up Parameters for a Drill-through Target in Analysis Studio 414 Example - Drill Through Between OLAP Reports in the Same Package 415 Example - Drill Through from an OLAP Report to a DMR Report 418 Debugging a Drill-through Definition 421 Access the Drill-through Assistant 422 Example - Debugging a Drill-through Definition 423 Set Up Drill-through Access in a Report 424 Example - Drill Through to a Hidden Report from a Report Studio Report 425 Specify the Drill-through Text 427 Setting Up Drill-through Access from IBMCognos Visualizer 428 Setting Up Drill-through Access from PowerPlay Web 428 Create and Test the Target for a Series 7 Report 428 Part 7:Portal Services Administration Chapter 29:Managing Portlets and Styles 431 Portlets 431 Cognos Portlets 431 Other Portlets 434 Import Portlets 434 Control Access to Portlets 435 Configure the Portlet Cache 436 Modify 
a Portlet 437 Display the HTML Code From the Source RSS Feed in RSS Viewer and Cognos Navig- ator 438 Styles 439 Add a New Style 440 Control Access to Styles 441 Modify a Style 442 Chapter 30:Deploying Cognos Portlets to Other Portals 443 Deploying Cognos Portlets to WebSphere Portal 5.0,5.1,6.0,and 6.1 443 Install the Portlet Applications File 444 Configure the Portlet Applications 445 Configure the Portlet Cache 446 Customize the Content of Cognos Portlets 446 Deploying Cognos Portlets to SAP Enterprise Portal 6.0,6.4,and 7.0 447 Install the IBMCognos Business Package 448 Edit Properties for the iViews 449 Set the Default iView Content and Appearance for All Users 450 Deploying Cognos Portlets to ALUI 6.1 and 6.5 Portal 451 Start the Remote Server 451 Import the Cognos Portlet Package File 454 Connect to the Remote Server 455 Customize the Content of Cognos Portlets 456 Deploying Cognos Portlets to Microsoft SharePoint Portal Server 2003 and 2007 456 Set up Virtual Directories and Change the Gateway URIs 457 Copy the Cognos Web Parts Resources to the IIS HTTP Root Directory 458 Set Up the IBMCognos Security Trust File 459 12 Table of Contents Modify the.NET Framework web.config File 460 Edit the Cognos Web Parts Catalog Files 463 Restart IIS 463 Add Cognos Web Parts to a SharePoint Page 464 Customize the Content of Cognos Web Parts 465 Migrating Cognos Portlets from IBMCognos ReportNet 1.1 to IBMCognos 8 466 Change the Root Name of File Paths in Cognos Portlets 466 Disable the Transfer of the IBMCognos 8 Passport ID as a URL Parameter 467 Set Portal Services Protocol Scheme 469 Configuring Security for Portal Services 469 Disable Anonymous Access to IBMCognos 8 Components 470 Enable Single Signon Using Shared Secret 470 Enable Single Signon for SAP EP with the SAP Logon Ticket 476 Enable Single Signon for SAP EP with User Mapping 476 Enable Secure Communication Between SAP EP and IBMCognos 8 Components 477 Enable Single Signon for WebSphere Portal Using the Application Server 478 Enable Single Signon for BEA ALUI Portal Using Basic Authentication 478 Enable Single Signon for BEA ALUI Portal Using SiteMinder 479 Part 8:Customization Chapter 31:Customizing the Appearance of IBMCognos 8 481 Making Global Changes to all Components 481 Customizing Styles 481 Rebranding the IBMCognos 8 Interface 483 Changing IBMCognos 8 Fonts 484 Changing the Global IBMCognos 8 Style Sheet 484 Migrating Changes to Future Releases 485 Modifying the Appearance of IBMCognos Connection 486 Example - Customize the Default Welcome Page 486 Example - Change the Branding Details in the IBMCognos Connection Main Header 486 Example - Change the Background Color in the IBMCognos Connection Main Header 487 Example - Change the Portal Graphics 488 Example - Change the Default Fonts for Page Titles and Instructions 488 Modifying the Report Studio Style Sheets 488 Example - Change the Fonts Used in Report Studio 489 Example - Change the Colors Used in Report Studio Menus 489 Example - Change the Report Studio Graphics 490 Modifying the Query Studio Style Sheets 490 Example - Change the Colors Used in Query Studio Menus 491 Example - Change the Query Studio Graphics 491 Customize the Query Studio Toolbar and Menus 492 Modifying the Appearance of Cognos Viewer 494 Modifying the Cognos Viewer Style Sheets 495 Example - Change the Language of the Cognos Viewer User Interface 495 Modifying the Prompt Page Style Sheets 496 Adding Custom Report Templates for Report Studio 497 Create a Report Specification for a Custom Report 
Template 497 Add a Custom Report Template to the templates.xml File 499 Provide an Icon for the Custom Report Template 499 Administration and Security Guide 13 Table of Contents Add the Custom Template Information to the Resources.xml File 499 Chapter 32:Customizing the Functionality of IBMCognos 8 503 Upgrade the ReportNet system.xml Files to IBMCognos 8 503 Customizing IBMCognos Connection 504 Add or Hide User Interface Elements Based on Groups and Roles 504 Hide and Disable the New URL Button 510 Limit the Number of Entries That Users Can Cut,Copy,and Paste 511 Customizing Object Actions 511 Restrict Content Browsing 515 Implementing a Custom Welcome Page 516 Customize Report Output Formats in IBMCognos Connection and Cognos Viewer 519 Configure the Document Lookup Table 521 Start Query Studio in Preview Mode 522 Customizing Data Formats for Query Studio 522 Modify the cogformat.xml File 523 Change the Order of Data Formats 523 Change the Text Strings 524 Remove Data Formats 525 Add a Data Format to a Locale 526 Add Data Formats for a New Locale 527 Change the Default Query Studio Template 527 Modify Properties for the CSV Output Format 528 CSV Properties and Values 529 Supported Encoding Values 530 Modify Properties for the Batch Report Service and Report Service 531 Batch Report Service and Report Service Properties and Values 531 Customize Error-Handling on the SMTP Mail Server 532 Disable Report Attachments in Email Messages 536 Show Attachments in IBMLotus Notes 536 Disable Support for Trigger-based Scheduling 537 Set Up a Trigger Occurrence on a Server 537 Change the Suppression Functionality in Analysis Studio 539 Part 9:Troubleshooting Chapter 33:Troubleshooting Resources 541 Error Messages 541 Log Files 541 Core Dump Files 544 Metric Dump File 545 Windows Event Viewer 546 Samples 546 Example - Testing Report Studio 547 View the Report Definition in Query Studio 547 Call IBMCognos Resource Center 547 IBMCognos Diagnostic Tools 548 Chapter 34:Problems Using Documentation 549 Problems When Printing a PDF Manual 549 14 Table of Contents Unable to Launch a Web Browser When Accessing Help 549 Text Does Not Appear Properly in Quick Tours 550 Chapter 35:Installation and Configuration Problems 551 Problems Starting IBMCognos 8 551 CFG-ERR-0106 Error When Starting the IBMCognos 8 Service in IBMCognos Configur- ation 551 Cryptographic Error When Starting IBMCognos 8 553 Unable to Start the IBMCognos 8 Service Because the Port is Used by Another Process 553 IBMCognos 8 Service Does Not Start or Fails After Starting 554 IBMCognos 8 Server Fails to Start and Gives No Error Message 554 IBMCognos BI Server Not Available When Starting IBMCognos 8 555 Cannot Log On to a Namespace When Using IBMCognos Connection 558 IBMCognos 8 Services Fail to Restart After a Network Outage 559 No Warning That Installing a Later Version of IBMCognos 8 Will Automatically Update the Earlier Version of the Content Store 559 Download of Resource Fails 559 DB2 Returns SQL1224N Error When Connecting from AIX 560 Content Manager Error When Starting IBMCognos 8 560 DPR-ERR-2014 Error Appears in Log File on Content Manager Computer 560 Non-ASCII Characters in Installation Directory Cause Run-time Errors 560 Cannot Open an MS Cube or PowerCube 561 Cannot Open an OLAP Data Source 562 The Page Cannot Be Found When Starting IBMCognos 8 in Windows 2003 562 The Page Is Not Shown When Opening a Portal After Installing IBMCognos 8 562 DPR-ERR-2058 Error Appears in Web Browser When Starting IBMCognos 8 562 EBA-090034 Error When 
Starting WebLogic 8 564 Report Studio Does Not Start 565 DPR-ERR-2022 Error Appears in Web Browser When Starting IBMCognos 8 565 Unable to Download the cognos.xts File 565 Application Server Startup Script Fails 566 Issues with IBMWebSphere 6.0 on AIX 5.3 566 IBMCognos 8 Running under WebLogic Application Server on AIX Fails 566 Deploying IBMCognos 8 to an Oracle Application Server or IBMWebSphere Application Server Fails 566 Microsoft Excel 2000 Multipage Report Type Does Not Work 567 Unable to Deserialize Context Attribute Error When Deploying the p2pd.war File to WebLogic 567 Error Appears After Upgrading IBMCognos 8 on a WebLogic Application Server 568 Chinese,Japanese,or Korean Characters Are Different After Upgrade 568 Accented or Double-Byte Characters May Not Display Correctly When Installing IBM Cognos 8 on Linux 569 RSV-SRV-0066 A soap fault has been returned or RQP-DEF-0114 The user cancelled the request Errors Appear in High User Load Environments 569 Problems Configuring IBMCognos 8 569 Configuration Tool cogconfig.sh Return Values Are Not Compliant with Conventional UNIX Return Values 569 Run Database Cleanup Scripts 570 Error Trying to Encrypt Information When Saving Your Configuration 572 Administration and Security Guide 15 Table of Contents Problems Generating Cryptographic Keys in IBMCognos Configuration 572 CAM-CRP-1315 Error When Saving Configuration 573 Manually Changing the Installation Directory Name Affects Installations Running Under an Application Server 573 Configuration Data is Locked by Another Instance of IBMCognos Configuration 574 Unable to Exit a Tab Sequence When Using Keyboard-only Navigation in IBMCognos Configuration 574 Unable to Save Your Configuration 574 Java Error When Starting IBMCognos Configuration 575 Cryptographic Error When Starting IBMCognos Configuration 575 Current Configuration Settings Are Not Applied to Your Computer 576 CM-CFG-029 Error When Trying to Save a Configuration That Specifies a SQL Server Content Store 576 DPR-ERR-2079 When Content Manager Configured For Failover 576 Importing a Large Content Store in Solaris using JRE 1.5 Fails 577 Users are Prompted for Active Directory Credentials 577 Font on UNIX Not Found When Starting IBMCognos Configuration 577 Unable to Load Essbase/DB2 OLAP Library in Framework Manager 578 Group Membership is Missing From Active Directory Namespace 578 Deploying IBMCognos 8 to an Oracle Application Server or IBMWebSphere Application Server 579 Errors Displayed Deploying to Oracle 10G Application Server 580 Page Cannot be Found Error Running Reports using IBMCognos 8 Go!Office 580 Error Initializing Oracle Content Store After Upgrade from ReportNet 580 CGI Timeout Error While Connected to IBMCognos 8 Components Through a Web Browser 581 Servlet Class Fails to Load in WebLogic 581 Desktop Icons or IBMCognos Configuration Window Flicker on Windows 582 Chapter 36:Security Problems 583 Problems Setting Up Security 583 Access to Entries is Denied During Deployment 583 Prompt to Change Passwords When Logging On to an Active Directory Security Source 583 Unable to Log On 583 Certificate Authority Error When Logging On to IBMCognos Connection 584 HTTPS DRP-ERR-2068 Error In Log File When No Error is Reported During a Switch to HTTPS 584 Entries Do Not Appear in IBMCognos Connection for a Member of a Newly Created Group 584 Problems Logging On to Cognos Portlets 585 Existing Passwords May not Work in an SAP Namespace 586 Users Are Repeatedly Prompted for Credentials When Trying to Log On to an SAP Namespace 
587 Problems Using Authentication Providers 587 CAM-AAA-0096 Unable to Authenticate User When Using an IBMCognos Series 7 Namespace 587 Expired Password Error Appears When Using Active Directory Server 588 Single Signon Is Not Working When Using Active Directory Server 588 Unable to Authenticate User for Cognos Portlets 589 16 Table of Contents Unable to Identify SAP Permissions Required 589 Unable to Access IBMCognos Administration When an NTLMNamespace Is Used and Single Signon Is Enabled 589 Unable to Automatically (by SSO) Connect to an SAP BWData Source Although it Is Configured to Use an External SAP Namespace for Authentication 590 Chapter 37:Report and Server Administration Problems 593 Database Connection Problems 593 Unable to Select ODBC as the Type of Data Source Connection 593 Cannot Connect to an SQL Server Database Using an OLE DB Connection 594 Intermittent Problems Connecting to a SQL Server Database 594 Cannot Access IBMCognos Series 7 Reports from IBMCognos Connection 594 Series 7 Namespaces Do Not Initialize When Services are Started 595 Content Manager Connection Problem in Oracle (Error CM-CFG-5036) 595 Cannot Connect to an OLAP Data Source 596 Error When Creating a Data Source Connection to a PowerCube 596 Not Yet Optimized IBMCognos PowerCubes May Open Slowly in IBMCognos 8 596 Other Administration Problems 597 Restarting Servers After Solving Content Store Problems 597 An Update or Delete Request Fails 598 BI Bus Server Processes Remain in Memory After a Shutdown 598 Higher Logging Levels Negatively Affect Performance 598 Problems Accessing Cognos Portlets 599 Unable to Edit Object Properties in ALUI 6.1 and 6.5 Portal 600 Only the Administrator Can See Cognos Portlets 601 Locale Mismatch in Cognos Navigator Portlet 601 Properties Pages in Cognos Portlets Are not Displayed Properly 602 Problems Displaying HTML Reports in a Multi-tab Dashboard 602 Unable to Identify SAP BWVersion and Corrections 603 SBW-ERR-0020 Error When Running Reports Based on SAP BWData Sources 603 Links to Referenced Content Objects are Broken Following Deployment 603 Table or View Does not Exist for Sample Database 604 CNC-ASV-0007 Error When Calling a Report Trigger From a Web Service Task 604 Chapter 38:Problems When Using Framework Manager 605 Measures Not Added After Initial Import in an Upgraded Model 605 Unable to View the Result Set of a Stored Procedure 605 Unable to Compare Two CLOBs in Oracle 605 An Out of Memory Error with ERWin Imported Metadata 606 Calculation Fails Testing 606 Framework Manager Cannot Access the Gateway URI 606 Object Names Appear in the Wrong Language 607 Full Outer Joins in Oracle Return Incorrect Results 607 Error When Testing Query Subjects in a Model Imported from Teradata 607 Error for Type-In SQL Query Subject 607 Function Name Not Recognized 608 QE-DEF-0259 Error 608 QE-DEF-0260 Parsing Error 609 Externalized Key Figures Dimension Retains Old Prompt Value 609 Older IBMCognos 8 Models Display Level Object Security 609 Administration and Security Guide 17 Table of Contents Exporting a Framework Manager Model to a CWMFile Fails With Error MILOG.TXT was not found 609 Difference in SQL for Inner Joins After Upgrading to IBMCognos 8.3 610 Publishing a Large Package Results in a CCLOutOfMemory Error 610 Full Outer Joins Not Sent to Oracle 9i and 10GR1 610 Review Governors When Upgrading Models 610 File Creation from Framework Manager fails with Error QE-DEF-0178 611 Chapter 39:Problems When Using Transformer 613 Known Issues When Modeling in IBMCognos 
Transformer 613 BAPI Error Occurs After the Prompt Specification File Edited Manually 613 Unable to Access an IQD Data Source using a Sybase Database Connection 613 Importing Time Dimensions from IBMCognos 8 Packages Not Supported 614 Data in Multiple Languages Does Not Display Properly 614 Unable to Use an IQD Created in Framework Manager That Contains an Oracle Stored Procedure 614 Preventing Errors When Model Calculations Use Double Quotation Marks 615 Framework Manager and Transformer may Display Different Locale Session Parameters for Some Languages 615 Regular Columns cannot be Converted to Calculated Columns and Vice Versa 615 Transformer Takes a Long Time to Retrieve Data from an SAP-based Data Source 616 Categories Missing When Creating a Transformer Model Based on an SAP Query Containing a Manually Created SAP Structure 616 Error Occurs When Creating a PowerCube Containing an SAP Unbalanced Hierarchy 616 Rebuilding a PowerCube Soon After Publishing Produces a TR0787 Error 616 Known Issues Using Cubes in the IBMCognos 8 Studios 617 Not Yet Optimized IBMCognos PowerCubes May Open Slowly in IBMCognos 8 617 Analysis Studio Shows the Wrong Currency Symbol 618 Changes to Decimals in Currency Formats 619 Ragged or Unbalanced Hierarchies Result in Unexpected Behavior 619 Error Opening Saved Reports After PowerCube Refresh 620 Unable to Open Sample Model,Great Outdoors Sales.mdl,and Generate Cubes 620 Data Records that Support External Rollup Measure Data are Not Always Created 621 Chapter 40:Problems Authoring Reports 623 Problems Creating Reports 623 Chart Labels Overwrite One Another 623 Chart Shows Only Every Second Label 623 Chart Gradient Backgrounds Appear Gray 623 Division by Zero Operation Appears Differently in Lists and Crosstabs 624 Application Error Appears When Upgrading a Report 624 Nested List Report Containing a Data Item That is Grouped More Than Once Does Not Run After Upgrade 624 Background Color in Template Does not Appear 625 Subtotals in Grouped Lists 625 Metadata Change in Essbase Not Reflected in Reports and in the Studios 625 Relationships Not Maintained in a Report With Overlapping Set Levels 625 Creating Sections on Reports That Access SAP BWData Sources 626 Error Characters (--) Appear in Reports 626 Descendants Function Unreliable with Sets 627 18 Table of Contents Columns,Rows,or Data Disappear With SSAS 2005 Cubes 627 Unexpected Cell Formatting in Reports 628 Report Differences Between TM1 Executive Viewer and IBMCognos 8 with TM1 Data Sources 628 Order of Metadata Tree Differs for TM1 Data Sources 628 Problems Calculating Data 629 Count Summaries in Query Calculations Include Nulls with SAP BWData Sources 629 Unexpected Summary Values in Nested Sets 629 Incorrect Results in Summaries When Using OLAP Data Sources 630 Incorrect Results with IBMCognos PowerCubes and Time Measures 631 Problems Filtering Data 631 HRESULT= DB_E_CANTCONVERTVALUEError When Filtering on a _make_timestamp Column 631 Problems Distributing Reports 632 A Report Link in an Email Notification Does Not Work 632 Report Contains No Data 632 Hyperlinks in Email Messages Are Stripped Out When the Agent is Saved 632 Errors When Running Web Service Tasks 633 Cannot Call the SDK from Event Studio 633 Saving a Report Takes a Long Time 633 Chapter 41:Problems Running,Viewing,or Printing Reports and Analyses 635 Problems Running Reports and Analyses 635 Summaries in Report Do not Correspond to the Visible Members 635 Unexpected Results for Analysis Studio Reports Using Suppression and Nested Rows 
636 Unexpected Results May Occur When Using Items from the Same Hierarchy on Multiple Crosstab Edges 637 Defining Languages for OLAP Data Sources 637 Crosstab Shows Percentage But Chart Shows Values 637 Cube Refresh in Data Analysis Studio Uses UTC/GMT Time 637 Cannot Drill when Caption Represents a Blank or a Zero-length String 638 DPR-ERR-2082 The Complete Error Has Been Logged by CAF With SecureErrorID 638 Query Studio Does Not Generate a SELECT DISTINCT statement if a Column is Aliased Without Using the Actual Column Name 638 Cannot Find the Database in the Content Store (Error QE-DEF-0288) 638 Parse Errors When Opening or Running an Upgraded Report 639 Overflow Error Occurs When a Value in a Crosstab Is More Than 19 Characters 639 IBMCognos 8 Runs Out of TEMP Space 639 A Report Does Not Run as Expected 639 Performance Issues when Showing Multiple Attributes Using Dimensionally Modeled Relational Data Sources 640 Analysis Studio Shows the Wrong Currency Symbol 640 Error Occurs in Japanese Internet Explorer 7 When Running an Excel Report in Analysis Studio 641 The ORA-00907 Error Appears When Running a Report 641 Drilling Through to IBMCognos 8 froman IBMCognos Series 7 Product Results in Firewall Error 641 Scheduled Reports Fail 642 The Table or View Was Not Found in the Dictionary 642 Administration and Security Guide 19 Table of Contents Mixed Languages Are Displayed in IBMCognos Connection When Using Samples 642 Unable to Select Multiple Report Formats When Running a Report 643 A Report Does Not Run as Scheduled 643 A Report or Analysis Does Not Run Because of Missing Items 643 Cannot View Burst Report 644 PCA-ERR-0057 Recursive Evaluation Error 645 Arithmetic Overflow Error When Running a Report in PDF Format 645 Performance Problems When Running Reports 645 CGI Timeout Error While Transferring Data to IBMCognos 8 Components 645 The BAP-ERR-0002 BAPI Error 646 The Out of Memory Error Appears in HP-UX 646 A Query Is Slow When Filtering Non-ASCII Text 646 Query Studio Output Takes a Long Time to Run 647 Problems Viewing Reports 647 A Report Upgraded from ReportNet Does Not Retain its Original Look 647 Drill-through Links Not Active in the Safari Browser 647 A Running Total in Grouped Reports Gives Unexpected Results 647 The Page Cannot Be Found Error Appears When Viewing Report Outputs from Email Links 648 Non-English Characters Appear as Placeholders 648 Cannot Drill Between PowerCubes Because MUNs Do Not Match 648 Unexpected or Empty Results When Drilling Through 649 Cannot Drill From a Relational Source to a Cube 650 Charts Do Not Appear in HTML reports 650 Portal Problems 651 Cannot Connect to a SQL Server Database Using an ODBC Driver 651 The My Folders Tab Does Not Appear After Logging On to IBMCognos Connection 652 Icon Graphics Are Not Working in Portlets 652 Styles Used in the Previous Installation Still Appear 652 Unable to Click Links 653 Missing Images in a PDF Report 653 Charts in PDF Output Show Unexpected Results 653 Problems Printing Reports 654 Unable to Print PDF Reports 654 A Printed HTML Report is Unsatisfactory 655 Chapter 42:Problems When Using Map Manager 657 Problems Importing Files 657 Error Importing Translated Text File 657 Chapter 43:Problems With Metrics 659 Metric Studio Log Files 659 Metric Studio Support Bundle 660 Known Issues When Using Metric Studio 660 Metric Studio Reports Fail Because of an Oracle Internal Error 660 Metric Studio Errors Occur When Loading Data into an Oracle Database 661 Oracle Errors Occur When Using the cmm_uninstall Script 661 
Error When Attempting to Run Metric Studio on SQL Server 2005 661 Data froma Relational Database Source or a Flat File Data Source Does Not Appear 662 A Metric Maintenance Task Fails to Run 663 20 Table of Contents You Do Not Have Permission to Access This Metric Package.Contact Your System Administrator.663 Failed to Check the Metrics Store Install Status Error When Using DB2 8.2.3 663 Errors Occur When Importing Tab-delimited Files into a DB2 Metric Store 663 Required User Permissions for the Metric Store Database (MS SQL Server) 664 Error When Viewing a History Chart or Diagram on a DB2 Server with the 8.2 JDBC Driver 664 Oracle 9.2 Package Initialization Error if NLS_LANG Environment Variable is Not Set Appropriately Before Starting Up IBMCognos 8 Tomcat Server 664 Known Issues When Using Metric Designer 664 Adding Multiple IQD Files to an Import Source 664 Previewed Scorecard Hierarchy Shows Blanks 665 Chapter 44:Troubleshooting IBMCognos 8 for Microsoft Office and the Report Data Service 667 Configuration Issues 667 The IBMCognos 8 for Microsoft Office Interface Fails to Initialize in Microsoft Office 667 IBMCognos 8 Go!Office Does Not Start in Microsoft Word 667 IBMCognos 8 for Microsoft Office Fails to Initialize in Microsoft Internet Explorer 668 bo:heap Buffer Overflow Error 668 Microsoft Office Does Not Open a Microsoft Office Document Published from IBM Cognos 8 for Microsoft Office 668 Unable to Open Published Microsoft Office Documents fromIBMCognos Connection 669 Error Messages,the.NET shortcut,or the.NET Console Are Not in the Language of the .NET Framework 2.0 That Was Installed 670 Workbook Closes Unexpectedly if the Name Contains a Square Bracket 670 The server committed a protocol violation.Section=ResponseHeader Detail=CR must be followed by LF 670 Reports Unavailable in IBMCognos Connection Jobs after Using Save As Command in Report Studio 671 Unable to Correctly Display East Asian Characters 671 The Content of the Cell-based Report Shows#NAME?671 Processing Issues 672 Cannot Render this Report 672 RDS Data Limit Exceeded When Importing from Large Report Outputs 672 RDS Server Unavailable 672 Imported Reports Are Missing Charts or Images 673 #ERROR Appears in Cells with Multiple Images in a Cell (Excel Only) 673 The Dispatcher Is Unable to Process the Request.The Request Is Directed to an Unknown Service Name:Content 673 Report Content is Not Imported 673 Incorrect Format for the Prompt Value in Prompted Reports 674 DPR-ERR-2079 Firewall Security Rejection 674 This itemcannot be expanded.Microsoft Excel has reached the maximumnumber of rows or columns for this worksheet 675 Prompted to Log on for Each Imported Report 675 Object reference not set to an instance of an object 675 Error 0:RSV-BBP-0027 The Secondary Request Failed 675 Security Issues 675 IBMCognos 8 for Microsoft Office Unable to Create Trust Relationship 676 Administration and Security Guide 21 Table of Contents Unable to View Reports After Clicking View Report 676 Report Data Service (RDS) Numbered Error Messages 676 RDS-ERR-1000 Report Data Service Could Not Process the Response from the Content Provider 676 RDS-ERR-1001 The PowerPlay Report Name Could Not Be Run.The Expected Response Was Not Returned by PowerPlay 677 RDS-ERR-1004 A Connection Could Not Be Established with IBMCognos 8 677 RDS-ERR-1005 The Logon Requirements for IBMCognos 8 Could Not Be Obtained.You May Already Be Logged into this Namespace,or the Target Namespace Does Not Exist 678 RDS-ERR-1012 IBMCognos Content Service was Unable to 
Discover the Content Pro- viders 678 RDS-ERR-1013 Report Data Service Was Unable to Query Content Manager 678 RDS-ERR-1014 Report Data Service Was Unable to Create the Document Object Object Name 678 RDS-ERR-1015 Report Data Service Was Unable to Create a NewDocument Version 678 RDS-ERR-1016 Report Data Service Was Unable to Create a New Document Content Object 678 RDS-ERR-1018 The IBMCognos 8 Report Name Could Not Be Run.The Expected Response Was Not Returned by IBMCognos 8 679 RDS-ERR-1019 IBMCognos Content Service Was Unable to Retrieve the Portal Information from IBMCognos Connection 679 RDS-ERR-1020 The Currently Provided Credentials are Invalid.Please Provide the Logon Credentials 679 RDS-ERR-1021 The IBMCognos 8 Report Name Could Not be Run Because it Contains Unanswered Prompts.Please Provide the Prompt Answers,and Run the Report Again 680 RDS-ERR-1022 The Request Received by Report Data Service Is Not Valid 680 RDS-ERR-1023 The Report Name Could Not Be Run Because It Exceeds the Report Data Service Data Size Limit Set by the Administrator 680 RDS-ERR-1027 The Encoding for the PowerPlay Server Name Could Not Be Determined. ISO-8859-1 Will Be Used as the Encoding 680 RDS-ERR-1030 A Security Error Occurred While Trying to Establish a Connection with Name 680 RDS-ERR-1031 Report Data Service was unable to retrieve the metadata for Name 680 RDS-ERR-1033 Report Data Service Was Unable to Create the Report View Name 680 RDS-ERR-1034 The Report Specification for Name Could Not Be Retrieved From IBM Cognos 8 681 RDS-ERR-1039 The Request Could Not Be Cancelled.The Request is No Longer Run- ning 681 RDS-ERR-1040 The Conversation With Conversation ID Has Been Cancelled 681 RDS-ERR-1044 The Output for the Requested Version for Object NameCould Not be Retrieved 681 RDS-ERR-1045 LayoutDataXML Output Was Not Generated for the Requested Version for Object [Name] 682 IBMCognos 8 Go!Office Numbered Error Messages 682 COC-ERR-1003 Failed to Create Slide 682 COC-ERR-1302 There Is No Data Source Available 682 COC-ERR-2005 The Import Failed 682 22 Table of Contents COC-ERR-2006 Failed to Load the Portal Tree:Name 683 COC-ERR-2012 Failed to Render the List 683 COC-ERR-2013 This Is an Unsupported Office Application 683 COC-ERR-2014 Refresh Failed 683 COC-ERR-2015 Failed to Open the Import Wizard Dialog 683 COC-ERR-2019 Failed to Refresh an Image 684 COC-ERR-2301 Logon Failed 684 COC-ERR-2303 This Report Is Not Valid for Rendering 684 COC-ERR-2304 This Is a Prompted Report.It Is Not Currently Supported 684 COC-ERR-2305 Microsoft Excel Returned an Error.Ensure That Microsoft Excel is Not in Edit Mode,Then Try Again 685 COC-ERR-2308 Report Specification is Empty 685 COC-ERR-2603 You Must Add a Slide to the Presentation Before Importing Any Con- tent 685 COC-ERR-2604 Cannot Render Empty List Object 685 COC-ERR-2607 Microsoft Office Message 685 COC-ERR-2609 The Custom property"Property_Name"does not exist 685 IBMCognos 8 for Microsoft Office Numbered Error Messages 686 COI-ERR-2002 Block type is not valid 686 COI-ERR-2005 This Version of Microsoft Office Is Not Supported 686 COI-ERR-2006 This Microsoft Office Product Is Not Supported 686 COI-ERR-2008 Unable to Retrieve from Resources.Tried'{0}'686 COI-ERR-2009 Unable to Perform This Operation Because Microsoft Excel is in Edit Mode 687 COI-ERR-2010 The Name {0} is Not Valid.A Name Must Not Contain Both a Quote (") Character and an Apostrophe (') Character 687 COI-ERR-2011 The server did not return the expected response.Check that the gateway is valid.687 
IBMCognos 8 BI Analysis for Microsoft Excel Numbered Error Messages 687 COR-ERR-2002 Block Type is Not Valid 687 COR-ERR-2005 Block Specification is Not Valid or Missing"{0}"687 COR-ERR-2006 Unexpected Type:[Stacked Block] 687 COR-ERR-2007 Error Retrieving from Resources.Tried'{0}'688 COR-ERR-2009 Name Formula is Not Valid 688 COR-ERR-2010 Formula is Not Valid 688 COR-ERR-2011 Prompted Metadata Is Not Supported 688 COR-ERR-2012 Unable to Load Metadata 688 COR-ERR-2013 Exploration Cannot Be Converted to Formula Based Because at Least One Context Item Contains a Selection 688 COR-ERR-2014 Due to Excel worksheet limitations the results may be truncated 688 COR-ERR-2015 Exploration Cannot Be Converted to Formula Based Because There is a Default Measure and a Measure on Either Rows or Columns 689 COR-ERR-2016 Unable to Retrieve Package Name 689 COR-ERR-2017 The Current Selection Did Not Return Any Data 689 Part 10:Reference Material Appendix A:Round Trip Safety Configuration of Shift-JIS Characters 691 Example:Safe Conversion of Shift-JIS 692 The Round Trip Safety Configuration Utility 692 Administration and Security Guide 23 Table of Contents Specify Conversions 693 Specify Substitutions 694 Apply the Conversions and Substitutions 695 Restore the Default Conversion Settings 695 Specify Conversions for Series 7 PowerPlay Web Reports 696 Appendix B:Initial Access Permissions 697 Content Manager Hierarchy of Objects 697 Appendix C:Localization of Samples Databases 709 One Column Per Language 709 One Row Per Language 710 Transliterations and Multiscript Extensions 711 Appendix D:User Interface Elements Reference List 713 Elements You Can Hide 713 Elements You Can Add 720 Appendix E:User Reference Help for Portal Services 723 Cognos Navigator 723 Cognos Search 725 Cognos Viewer (IBMCognos Connection) 727 Cognos Viewer 730 Cognos Extended Applications 732 Metric List 732 Metric History Chart 734 Metrics Impact Diagram 736 Metrics Custom Diagram 737 Bookmarks Viewer 738 HTML Viewer 739 Image Viewer 740 RSS Viewer 741 HTML Source 743 Multi-page 744 Appendix F:Schema for Data Source Commands 747 commandBlock 748 Child Elements of commandBlock Element 748 Parent Elements of commandBlock Element 748 commands 748 Child Elements of commands Element 748 Parent Elements of commands Element 748 sessionStartCommand 748 Child Elements of sessionStartCommand Element 749 Parent Elements of sessionStartCommand Element 749 sessionEndCommand 749 Child Elements of sessionEndCommand Element 749 Parent Elements of sessionEndCommand Element 749 arguments 750 Child Elements of arguments Element 750 Parent Elements of arguments Element 750 24 Table of Contents argument 750 Child Elements of argument Element 750 Parent Elements of argument Element 750 setCommand 751 sqlCommand 751 Child Elements of sqlCommand Element 751 Parent Elements of sqlCommand Element 751 sql 751 Child Elements of sql Element 751 Parent Elements of sql Element 751 name 751 Child Elements of name Element 752 Parent Elements of name Element 752 value 752 Child Elements of value Element 752 Parent Elements of value Element 753 Appendix G:Data Schema for Log Messages 755 Table Definitions 755 Table Interactions 756 COGIPF_ACTION Table 758 COGIPF_USERLOGON Table 759 COGIPF_NATIVEQUERY Table 760 COGIPF_PARAMETER Table 761 COGIPF_RUNJOB Table 762 COGIPF_RUNJOBSTEP Table 763 COGIPF_RUNREPORT Table 765 COGIPF_EDITQUERY Table 767 COGIPF_VIEWREPORT Table 769 COGIPF_AGENTBUILD Table 770 COGIPF_AGENTRUN Table 772 COGIPF_THRESHOLD_VIOLATIONS Table 774 Appendix 
H:Performing Tasks in IBMCognos 8 Using URLs 779 CGI Program and Alternative Gateways 779 URL Methods 780 Parameterized URL Method 780 cognosLaunch Method 781 Common Optional Parameters 782 URL Validation 782 Starting IBMCognos 8 Components 783 Start Parameters 784 Starting Report Studio 784 Starting Query Studio 785 Starting Analysis Studio 787 Starting Metric Studio 788 Starting Event Studio 788 Starting Cognos Viewer 789 Starting IBMCognos 8 Components in a Specified Browser Window 791 Access an IBMCognos Connection Page 793 Administration and Security Guide 25 Table of Contents Preparing a Page for Standalone Access 794 Using Search Paths and Page IDs 794 Using a Page ID Instead of the Object Search Path 795 Glossary 797 Index 803 26 Table of Contents Introduction This document is intended for use with IBMCognos 8.IBMCognos 8 is a Web product with integrated reporting,analysis,scorecarding,and event management features. This document contains step-by-step procedures and background information to help you administer IBMCognos 8. Audience To use this guide,you should be familiar with reporting and security concepts,and have experience using a Web browser. Related Documentation Our documentation includes user guides,getting started guides,newfeatures guides,readmes,and other materials to meet the needs of our varied audience.The following documents contain related information and may be referred to in this document. Note:For online users of this document,a Web page such as The page cannot be found may appear when clicking individual links in the following table.Documents are made available for your par- ticular installation and translation configuration.If a link is unavailable,you can access the document on the IBMCognos Resource Center (). DescriptionDocument Using IBMCognos Connection to publish,find, manage,organize,and viewIBMCognos content, such as scorecards,reports,analyses,and agents IBMCognos Connection User Guide Creating self-service business intelligence reportsQuery Studio User Guide Authoring reports that analyze corporate data according to specific needs Report Studio Professional Authoring User Guide Authoring financial reports that analyze corporate data according to specific needs Report Studio Express Authoring User Guide Creating and managing agents that monitor data and performtasks when the data meets predefined thresholds Event Studio User Guide Creating and publishing models using Framework Manager Framework Manager User Guide Licensed Materials – Property of IBM 27© Copyright IBMCorp.2005,2009. 
● IBM Cognos PowerPlay Enterprise Server Guide: administering PowerPlay servers and deploying cubes and reports to PowerPlay users in Windows environments or on the Web
● IBM Cognos PowerPlay Web User Guide: viewing, exploring, formatting, and distributing PowerPlay reports using a Web browser
● IBM Cognos Web Portal User Guide: viewing, finding, organizing, and sharing information in Upfront, the customizable interface used to publish IBM Cognos reports to the Web
● Metric Studio User Guide: authoring scorecard applications and monitoring the metrics within them
● IBM Cognos 8 Go! Office User Guide: using IBM Cognos 8 Go! Office to retrieve content from IBM Cognos reporting products within Microsoft Office
● Analysis Studio User Guide: exploring, analyzing, and comparing dimensional data
● IBM Cognos 8 New Features: describing features that are new in this release
● Map Manager Installation and User Guide: installing and using Map Manager to import and manage maps that are used in map reports
● IBM Cognos 8 Architecture and Deployment Guide: understanding the IBM Cognos 8 architecture, developing installation strategies, including security considerations, and optimizing performance
● IBM Cognos 8 Installation and Configuration Guide: installing, upgrading, configuring, and testing IBM Cognos 8, changing application servers, and setting up samples
● Custom Authentication Provider Developer Guide: creating a custom authentication provider or a trusted signon provider using the Custom Authentication Provider API
● Framework Manager Developer Guide: creating and publishing models using the Framework Manager API
● IBM Cognos Finance User Guide: creating and managing financial reports and administering the database
● IBM Cognos 8 Software Development Kit Getting Started: using the samples included with the IBM Cognos 8 SDK to learn how to automate IBM Cognos 8
● IBM Cognos 8 Migration Tools User Guide: moving metadata and applications from IBM Cognos Series 7 to IBM Cognos 8
● IBM Cognos 8 Planning Installation and Configuration Guide: installing and configuring IBM Cognos 8 Planning products

Finding Information
Product documentation is available in online help from the Help menu or button in IBM Cognos products.

To find the most current product documentation, including all localized documentation and knowledge base materials, access the IBM Cognos Resource Center (http://www.ibm.com/software/data/support/cognos_crc.html).

You can also read PDF versions of the product readme files and installation guides directly from IBM Cognos product CDs.

Using Quick Tours
Quick tours are short online tutorials that illustrate key features in IBM Cognos product components. To view a quick tour, start IBM Cognos Connection and click the Quick Tour link in the lower-right corner of the Welcome page.

Getting Help
For more information about using this product or for technical assistance, visit the IBM Cognos Resource Center (http://www.ibm.com/software/data/support/cognos_crc.html). This site provides information on support, professional services, and education.

Printing Copyright Material
You can print selected pages, a section, or the whole book. You are granted a non-exclusive, non-transferable license to use, copy, and reproduce the copyright materials, in printed or electronic format, solely for the purpose of operating, maintaining, and providing internal training on IBM Cognos software.

Chapter 1: What’s New?
This section contains a list of new, changed, and deprecated features for this release. It also contains a cumulative list of similar information for previous releases. It will help you plan your upgrade and application deployment strategies and the training requirements for your users.

For information about upgrading, see the Installation and Configuration Guide for your product.

For information about other new features for this release, see the New Features Guide.

For changes to previous versions, see:
● New Features in Version 8.3
● Changed Features in Version 8.3

To review an up-to-date list of environments supported by IBM Cognos products, such as operating systems, patches, browsers, Web servers, directory servers, database servers, and application servers, visit the IBM Cognos Resource Center Web site (http://www.ibm.com/software/data/support/cognos_crc.html).

New Features in Version 8.4

Listed below are new features since the last release. Links to directly-related topics are included.

Additional Language Support

In addition to Japanese, German, and French, the installation documentation and the user interface for the installation program and IBM Cognos Configuration are available in the following languages:
● Chinese (simplified)
● Chinese (traditional)
● Korean
● Italian
● Spanish
● Portuguese (Brazilian)

You can use the new product languages when personalizing your user interface in IBM Cognos 8 (p. 262).

English product documentation is installed when you install the IBM Cognos 8 gateway component. The Installation and Configuration Guide, the Quick Start Installation and Configuration Guide, and the Readme are the exceptions, and are available in all supported languages. To access all other translated documentation, you must install the Supplementary Languages Documentation.

Support for IBM Metadata Workbench as a Lineage Solution

You can now configure IBM Metadata Workbench as a lineage solution in IBM Cognos 8. For more information, see "View Lineage Information for a Data Item" (p. 357).

Administrators can configure the lineage solution by specifying the lineage URI in IBM Cognos Administration (p. 152).

Access to IBM WebSphere Business Glossary

If you use the IBM WebSphere Business Glossary, you can now access the glossary from Cognos Viewer (p. 358).

You can configure the IBM WebSphere Business Glossary URL in IBM Cognos Administration (p. 153).

Managing Comments Using Cognos Viewer

You can now add user-defined comments to saved HTML, PDF, and XML reports using Cognos Viewer.

For more information, see (p. 371).

Adding Application Context to Dynamic SQL

An administrator can now define a custom string including application context that is added as a comment marker within SQL generated by the application (see the sketch at the end of this topic).

For more information, see "Add Application Context to Dynamic SQL" (p. 147).

Support for New Data Sources

You can now use the following data sources in IBM Cognos 8:
● IBM Cognos Now! Cube (p. 158)
● IBM Infosphere Warehouse Cubing Services (p. 168)
● TM1 (p. 181)
● SQL 2008 Native Client; see "Microsoft SQL Server Data Sources" (p. 173)
● Microsoft Analysis Service 2008; see "Microsoft Analysis Services Data Sources" (p. 169)

Support for New Portal Versions

IBM Cognos 8 Portal Services now provide extended support for IBM WebSphere 6.0 and 6.1, and BEA AquaLogic User Interaction 6.5 (ALUI 6.5).

For more information, see "Deploying Cognos Portlets to Other Portals" (p. 443).
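To illustrate the dynamic SQL feature described above: the custom string that the administrator defines travels inside the generated SQL as a comment, so a database administrator tracing query traffic can tell which user, report, or package produced a given statement. The following sketch is hypothetical; the query, the table, and the marker fields are invented for illustration, and the actual marker text depends entirely on the custom string the administrator configures.

    -- A generated query as it might arrive at the database, carrying an
    -- application-context comment marker. Everything shown here, including
    -- the user, report, and package values, is a placeholder.
    SELECT ORDER_METHOD_CODE, ORDER_METHOD_EN
    FROM GOSALES.ORDER_METHOD
    /* user=jsmith; report=Order Methods; package=GO Sales (query) */

Because the marker is carried in the SQL text itself, it appears in database traces and monitoring tools without any extra instrumentation on the database side.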
Hiding Entries

You can hide entries in IBM Cognos Connection and IBM Cognos Administration, such as reports, packages, pages, folders, jobs, data sources, portlets, and so on. This functionality is most often used with drill-through reports.

Hiding an entry does not affect its security policies.

For more information, see "Hide an Entry" (p. 256).

Updating Published PowerCubes

You can now use the pcactivate command to make new versions of published PowerCubes available to users.

For more information, see "Deploy Updated PowerCubes" (p. 163).

Object Capabilities

You can now specify capabilities for individual packages.

For more information, see "Object Capabilities" (p. 231).

Schedule Credentials

When you choose to import schedules in the deployment, you can change the imported schedule credentials to your credentials.

For more information, see "Including Schedules" (p. 315).

Save History Details for Job Steps

You can save history details for job steps when the run activity completes successfully.

For more information, see "Job Properties" (p. 250).

Viewing Lineage Information

A data item’s lineage information traces the item’s metadata back through the package and the package’s data sources. Viewing lineage information ensures that you add the correct data items to a report.

For more information, see "View Lineage Information for a Data Item" (p. 357).

Enhanced Drill-through Capabilities

In earlier versions of IBM Cognos 8, model-based drill-through supported only reports created in Analysis Studio, Query Studio, or Report Studio as targets. Other types of drill-through targets are now supported. For example, you can drill through to PowerPlay Studio reports saved in the content store, or to a package that contains a PowerCube.

In earlier versions of IBM Cognos 8, drill-through access required the existence of parameters in the target. IBM Cognos 8 now allows dynamic filtering of the data. In cases where more control is needed, you can continue to use the existing parameterized drill-through.

You now also can restrict the availability of package drill-through definitions to measures as well as other data when you set the scope.

If the source is based on a dimensional package, you can choose what property of the source metadata item to map to the target. For example, you can map the member caption of the source metadata item to a relational value in the target instead of using the business key. For more information, see "Drill-through Access" (p. 403).

The drill-through assistant contains improved debugging information (p. 421).

Metric Studio Content in Portal Pages

An IBM Cognos Connection page or a dashboard can now display metric impact diagrams and custom diagrams. This new content can be added by using the following new portlets:
● Metrics Impact Diagram: use to display impact diagrams associated with a metric.
● Metrics Custom Diagram: use to display custom diagrams associated with a scorecard.

For more information, see "Pages and Dashboards" (p. 269).

Changed Features in Version 8.4

Listed below are changes to features since the last release. Links to directly-related topics are included.
Composite Information Server is Replaced By IBM Cognos 8 Virtual View Manager

Composite Information Server was available with earlier releases of IBM Cognos 8. In the current release, Composite Information Server is replaced by IBM Cognos 8 Virtual View Manager, which is an IBM proprietary product that is based on a new version of Composite Information Server. In this release, the default repository is changed, from Microsoft SQL Server to IBM Informix. If you have Composite data sources defined in IBM Cognos Connection, you must migrate the existing repository to the new default repository.

For more information, see "ODBC Data Sources" (p. 176). For more information about migrating the repository, see the IBM Cognos 8 Virtual View Manager User Guide.

IBM Cognos 8 Portal Services

Plumtree portal is replaced by BEA AquaLogic User Interaction 6.1 (ALUI 6.1) portal.

For more information, see "Deploying Cognos Portlets to ALUI 6.1 and 6.5 Portal" (p. 451).
Snapshot of System Health

You can get a snapshot of the status of all servers, server groups, dispatchers, and services in the IBM Cognos topology. All system metrics are found on the System tab in IBM Cognos Administration. When you see statistics in their proper context, you can make better decisions regarding performance, scheduling, and capacity planning.

For more information, see "System Performance Metrics" (p. 91).

Managing Queues

IBM Cognos Administration provides specific views and tools to identify the report, job, or application currently in the queue or being processed. These views also reveal who is running the item, regardless of whether it is a background or interactive task. You can better understand what is happening in your environment and take action to resolve issues. For example, you can cancel a job for a user.

For more information, see (p. 287).

Reducing Deployment Details

An administrator can specify the level of deployment details logged to the content store. By default, the deployment history will contain only summarized information. This will save memory space, improve performance, and require less maintenance. For more information, see "Deploying Selected Public Folders and Directory Content" (p. 313).

You can learn the current status of a deployment by viewing periodic updates in the Monitor Service (p. 287).

Setting Priority on Schedules

You can set a priority, from 1 to 5, on a schedule. Setting a priority ensures that if two reports are waiting in the queue to be run, the report with the higher priority is run first. You can override and reset the priority on any schedule.

For more information, see "Manage Entry Run Priority" (p. 295).

Better Control of Interactive Features in Reports

You can now disable interactive features in addition to drill-up, drill-down, and package drill report options. Administrators can control access to all interactive features, including drill-up and drill-down, package drill, authored drill, Go! Search, and notifications.

This feature gives you more control of interactive activities. Hiding these functions may reduce the need for user training in large deployments.

The new capabilities are exposed as run options in IBM Cognos Connection (p. 350).

New Sample Audit Reports

Sample audit reports have been added for metric threshold exceptions, agents, failed reports, and presentation service. For more information, see "Setting up Logging" (p. 79).

Publishing and Managing Packages in Non-root Folders

You can now publish packages from Framework Manager into any folder in IBM Cognos Connection. In previous versions, packages could be published and maintained only in the single root folder. These packages can also be moved from the root folder to any folder in IBM Cognos Connection. In Framework Manager, any target folder can be used for publishing.

For more information, see "Models and Packages" (p. 245).

Enabling Report Studio Authoring Modes

Report Studio now accommodates two distinct types of report authors:

● Professional: this user can access both the Report Studio Professional authoring mode and the Express authoring mode.

● Express: this user can access the Report Studio Express authoring mode for financial report authoring, to create and maintain statement style reports. Financial authoring requires many, but not all, of the features that exist in Report Studio and interaction with live data.
In IBM Cognos Administration, you can restrict users to have access to only the Express authoring mode in Report Studio. For more information, see "Set Access to the Report Studio User Interface Profiles" (p. 229).

Server Administration

Server administration is enhanced with new capabilities. You can now:

● set PDF file character encoding, font embedding, and compression types and levels

● set the maximum execution time

● limit hotspots that are generated in an Analysis Studio or Report Studio chart

● set watch list output retention time

Settings for the maximum number of processes and connections have been improved. For some services, you can now set the maximum number of processes and the maximum number of high affinity and low affinity connections that the dispatcher can open to handle requests. For other services, you can set the maximum number of connections.

For more information, see "Server Administration" (p. 105).

Transformer Integrated into IBM Cognos 8

Transformer is now fully integrated into IBM Cognos 8 Business Intelligence. This includes the ability to leverage IBM Cognos 8 metadata, support for cube building on IBM Cognos 8 platforms, and integration with IBM Cognos 8 security.

For more information, see the Transformer User Guide.

My Activities and Schedules

You can now manage IBM Cognos 8 activities from My Activities and Schedules in IBM Cognos Connection.

You can view a list of your activities that are current, past, upcoming on a specific day, or scheduled. You can filter the list so that only the entries that you want appear. A bar chart shows you an overview of activities.

You can set run priority for entries. You can also view the run history for entries, specify how long to keep run histories, and rerun failed entries.

For more information, see "Activities Management" (p. 287).

My Watch Items

Use the My Watch Items area of the portal to view and manage alerts for new report versions and rules that you have set for conditional report delivery (p. 265). The My Watch Items functionality enables end users to monitor and manage business information that is critical to them from a single location.

As a report owner, you must allow report users to receive alerts and create watch rules for the reports. For information about how to enable these features for reports, see "Enable Watch Rules for a Report" (p. 369).

Report Alerts

By enabling an alert on a report, you can now be notified when a new version is available. Whenever a report is run and saved due to a scheduled or manual run, all subscribers receive an email that a new version is available.

Subscriptions are saved to the Alerts tab of My Watch Items (p. 265) and can be maintained from that location.

For information about how to subscribe to a report, see "Add Yourself to or Remove Yourself from the Alert List for a Report" (p. 368).

Watch Rules

A new watch rule action is available in Cognos Viewer. You can use watch rules to control when users are notified about the availability of new report versions. When a report is run and saved, a user-defined threshold condition is checked. If this condition satisfies a user's criteria, the report can be e-mailed.

To create a watch rule (p. 369), a saved report must be viewable in HTML format. You can select the data to be monitored and enter the threshold condition that will trigger the delivery of the report. Watch rules are saved to the Rules tab of My Watch Items (p. 265), and can be maintained from that location.
This feature lets users maintain their own report distribution preferences and avoid information overload.

Drilling Through on Multiple Values

Drilling through is now more powerful and flexible. You can pass multiple items, such as products or countries, to a target report (p. 379). You can now use this feature regardless of the type of drill-through path that was created. Drilling through is automatically enabled when you select multiple values.

In previous versions, passing multiple values was available only within drill-through paths created in IBM Cognos Connection.

Go Directly to Target Report When Only One Target Report Exists

When there is only one target report available, you can now go directly to the target report when you click the drill-through link in Cognos Viewer. If there are multiple target reports available, you see the Go To page. This behavior is automatic and works the same way whether the drill-through is defined in Report Studio or in a drill-through definition in IBM Cognos Connection.

For more information, see "Drill Through to Another Target" (p. 378).

Support for Microsoft Excel 2007

IBM Cognos 8 supports Microsoft Excel 2007 native spreadsheets as a report format, in addition to the existing Microsoft Excel HTML formats. The Microsoft Excel 2007 XML format, also known as XLSX, provides a fast way to deliver native Excel spreadsheets to Microsoft Excel XP, Microsoft Excel 2003, and Microsoft Excel 2007.

The use of a native Microsoft Excel format means that the spreadsheets are smaller and more usable. Because the new Office Open XML format is a recognized industry standard supported by ECMA International, the new format provides the added benefit of an open, documented integration format that extends an open systems solution.

The new format appears in the run report user interface. Users of Microsoft Excel XP and Microsoft Excel 2003 must install the Microsoft Office Compatibility Pack, which provides file open and save capabilities for the new format.

For more information about Excel format support, see "Excel Formats" (p. 361).

Saving Report Outputs to a File System

You can now export report results directly to a server file system using IBM Cognos Connection. You decide which formats to export, and select from a predefined set of directory locations. This feature makes it easy to integrate IBM Cognos content into other external applications.

IBM Cognos 8 does not keep a record of the exported reports, but does prevent and resolve name conflicts that arise when the same file is saved multiple times to the same directory. You are responsible for managing the reports after export. An XML descriptor file with the same file name prefix is created, which can be used by an external application to locate and manage the exported reports.

The export options appear as run options for a report, provided you were granted access to this feature. For more information, see "Save Report Output" (p. 365).

Resubmitting Failed Jobs and Reports

You can resubmit a failed job or report (p. 299). For example, you discover that 20 reports in a job containing 1,000 reports fail due to an invalid signon. The problem is corrected and the job is resubmitted so that only the 20 failed reports are rerun.

In previous versions, if you submitted a report and it failed, the run options associated with the report were lost. Now the report can be resubmitted without having to reset the run options.
Failed reports, jobs, and agent tasks can be resubmitted from the run history, accessed from the Past activities page of IBM Cognos Administration, or accessed from the Actions page of the item.

Resubmitting Failed Agent Tasks

Failed agent tasks can now be resubmitted with their original data values (p. 299). In previous versions, if a task failed, the data passed to the task was lost. Rerunning the agent may not solve this problem if the task is set to process new events only.

Default Actions for Agent Items

You can now choose a default action to use when an agent item is selected in IBM Cognos Connection, rather than automatically opening the agent in Event Studio (p. 386). The new choices are:

● show the most recent event list

● run the agent

● open the agent in Event Studio

The default action is defined on the Agent tab of the item properties in IBM Cognos Connection.

Tabbed Portal Pages

You can now create pages with multiple tabs that are easy to navigate. The new type of pages is also referred to as dashboards. Dashboards are created by using a new portlet named Multi-page. For more information, see "Create a Dashboard with Multiple Tabs" (p. 277).

Global Filters and Enhanced Portal Interactivity

You can select the dashboard context in the portal with one or more global filters. A global filter may be a prompt, a drill-up or drill-down action, or a report that is based on drill-through content. For example, you can add a prompt control to a portal page to automatically pass the selection to all reports on the page. When a prompt answer is changed, all related reports will refresh accordingly. So, if you answer a country prompt with Brazil, all related reports on the page will be filtered to show the data for Brazil.

When these techniques are used on a tabbed dashboard, the context is passed to all corresponding sections of the dashboard. This functionality allows for a single selection to drive a number of reports at once.

For more information, see "Adding Interactivity to Pages and Dashboards" (p. 280).

Metric Studio Content in Portal Pages

An IBM Cognos Connection page or a dashboard can now display more types of Metric Studio metrics and a history chart. This new content can be added by using the following new portlets:

● Metric List: use to add a watchlist, an accountability list, a scorecard metric list, or a strategy metric list to the page.

● Metric History Chart: use to add a graphical chart that illustrates the historical performance of a metric to the page.

For more information, see "Pages and Dashboards" (p. 269).

Support for Microsoft SharePoint Portal 2003 and 2007

SharePoint Portal 2003 and 2007 is now supported in IBM Cognos 8. You can use this portal with the Cognos Navigator, Cognos Search, Cognos Viewer, Metric List, Metric History Chart, and Cognos Extended Applications portlets.

For more information, see "Deploying Cognos Portlets to Microsoft SharePoint Portal Server 2003 and 2007" (p. 456).

Changed Features in Version 8.3

Listed below are changes to features since the last release. Links to directly-related topics are included.

Updated IBM Cognos Connection Look

The IBM Cognos Connection user interface was changed to provide more space for reports and information that you care about, and use less space for toolbars and functions. The new features include:

● The Launch menu, which replaces the Tools menu. This menu lets you access the IBM Cognos 8 studios, Drill-through Definitions, and IBM Cognos Administration.
● The my area icon, which lets you access the My Watch Items, My Preferences, and My Activities and Schedules areas in IBM Cognos Connection.

● The portal style named Business.

● The updated Welcome to IBM Cognos 8 page.

For more information, see "Personalize the Portal" (p. 262).

More Information in the Go To Page

Additional information was added to the Related Links page. Related Links, also known as the Go To page, is used to show users all of the drill-through paths available from a source report. The page now automatically shows more information, such as the target report and its location. This information helps you choose which drill-through path to use.

Cognos Watchlist Portlet

The Cognos Watchlist Portlet was replaced by the Metric List portlet. In the new portlet, the watchlist is now one of the selectable options; see "Pages and Dashboards" (p. 269).

Replaced Capability

The capability known as Directory in previous releases of IBM Cognos 8 was replaced by the following, more granular capabilities:

● Data Source Connections

● Set capabilities and manage UI profiles

● Users, Groups, and Roles

For more information, see "Secured Functions and Features" (p. 223).

Chapter 2: IBM Cognos 8 Administration

After IBM Cognos 8 is installed and configured, you can perform server administration, data management, security and content administration, activities management, and portal services administration.

You can also perform the following IBM Cognos 8 administrative tasks:

● automating tasks (p. 44)

● setting up your environment (p. 46) and configuring your database (p. 47) for multilingual reporting

● installing fonts (p. 47)

● setting up printers (p. 48)

● configuring web browsers (p. 50)

● allowing user access to Series 7 reports from IBM Cognos Connection (p. 52)

● restricting access to IBM Cognos 8 (p. 53)

Aside from the typical administrative tasks, you can also customize the appearance (p. 481) and functionality (p. 503) of different IBM Cognos 8 components.

For information about potential problems, see the Troubleshooting section in this guide.

IBM Cognos Administration

In IBM Cognos Connection, you access IBM Cognos Administration from the Launch menu. You must have the required permissions to access IBM Cognos Administration functionality. See "Secured Functions and Features" (p. 223).

The administrative areas, the tab on which each is found, and their uses are listed below:

● Activities (Status tab): manage current, past, upcoming, and scheduled IBM Cognos 8 entries.

● Content Manager computers (Status tab): manage Content Manager computers.

● Content store (Configuration tab): perform content store maintenance tasks.

● Data sources (Configuration tab): create and manage data source connections.

● Deployment (Configuration tab): deploy IBM Cognos 8, to export from a source environment and then import in a target environment.

● Dispatchers and Services (Status tab): manage dispatchers and services.

● Distribution lists and contacts (Configuration tab): create and manage distribution lists and contacts.

● Portals (Configuration tab): manage styles, Cognos portlets, and other portlets in IBM Cognos Connection.

● Printers (Configuration tab): create and manage printers.

● Security (Security tab): control access to specific product functions, such as administration and reporting, and features within the functions, such as bursting and user defined SQL.

● System, dispatcher, server, and service administration (Status tab): monitor system performance using system metrics and administer servers.
● Server tuning (Status tab): tune server performance.

● Users, groups, and roles (Security tab): create and manage users, groups, and roles.

Automating Tasks

Virtually everything you can do with the product, you can achieve using the appropriate API, URL interface, or command line tool, as illustrated in the table below. For each goal, the table lists the relevant document, automation interface, and user interface:

● Goal: begin basic reporting with the IBM Cognos 8 SDK. Document: IBM Cognos 8 Getting Started Guide. Automation interface: BI Bus API. User interface: IBM Cognos Report Studio.

● Goal: modify a model, or republish it to UNIX or Windows. Document: Framework Manager Developer Guide and Framework Manager User Guide. Automation interface: Script Player tool. User interface: Framework Manager.

● Goal: modify an unpublished model using the updateMetadata and queryMetadata methods. Document: IBM Cognos 8 Developer Guide. Automation interface: BI Bus API. User interface: Framework Manager.

● Goal: retrieve the query items available in the published package using the getMetadata method. Document: IBM Cognos 8 Developer Guide. Automation interface: BI Bus API. User interface: IBM Cognos Connection.

● Goal: grant capabilities to users. Document: IBM Cognos 8 Developer Guide. Automation interface: BI Bus API. User interfaces: IBM Cognos Connection, Server Administration.

● Goal: administer and implement security by authenticating and managing users. Document: IBM Cognos 8 Developer Guide. Automation interface: BI Bus API. User interfaces: IBM Cognos Connection, Server Administration.

● Goal: run, view, and edit reports through a hyperlink in an HTML page. Document: IBM Cognos 8 Developer Guide (Using URLs to View, Edit, and Run Reports). Automation interface: URL interface. User interfaces: Cognos Viewer, Query Studio, Report Studio.

● Goal: manipulate objects in the content store. Document: IBM Cognos 8 Developer Guide (Managing IBM Cognos 8 Content). Automation interface: BI Bus API. User interfaces: IBM Cognos Connection, Query Studio, Report Studio, Framework Manager.

● Goal: administer reports. Document: IBM Cognos 8 Developer Guide. Automation interface: BI Bus API. User interface: IBM Cognos Connection.

● Goal: administer servers and manage dispatchers. Document: IBM Cognos 8 Developer Guide. Automation interface: BI Bus API. User interfaces: IBM Cognos Connection, Server Administration.

● Goal: modify or author a report. Document: IBM Cognos 8 Developer Guide. Automation interface: BI Bus API and report specification. User interface: Report Studio.

● Goal: modify the functionality of IBM Cognos 8 by hiding user interface elements and restricting content browsing. Document: IBM Cognos 8 Administration and Security Guide. Automation interface: URL interface. User interfaces: IBM Cognos Connection, Report Studio, Query Studio.

Setting up a Multilingual Reporting Environment

You can create reports that show data in more than one language and use different regional
https://www.techylib.com/el/view/tribestale/ibm_cognos_8
CC-MAIN-2018-34
en
refinedweb
std::basic_streambuf::sputbackc

Puts a character back into the get area. If a putback position is available in the get area (gptr() > eback()), and the character c is equal to the character one position to the left of gptr() (as determined by Traits::eq(c, gptr()[-1])), then simply decrements the next pointer (gptr()). Otherwise, calls pbackfail(Traits::to_int_type(c)) to either back up the get area or to modify both the get area and possibly the associated character sequence. The I/O stream function basic_istream::putback is implemented in terms of this function.

Parameters

c - the character to put back

Return value

If the putback position was available, returns Traits::to_int_type(c). Otherwise, returns what pbackfail() returns, which is Traits::eof() on failure.

Example

    #include <iostream>
    #include <sstream>

    int main()
    {
        std::stringstream s("abcdef"); // gptr() points to 'a' in "abcdef"
        std::cout << "Before putback, string holds " << s.str() << '\n';
        char c1 = s.get(); // c1 = 'a', gptr() now points to 'b' in "abcdef"
        char c2 = s.rdbuf()->sputbackc('z'); // same as s.putback('z')
                                             // gptr() now points to 'z' in "zbcdef"
        std::cout << "After putback, string holds " << s.str() << '\n';
        char c3 = s.get(); // c3 = 'z', gptr() now points to 'b' in "zbcdef"
        char c4 = s.get(); // c4 = 'b', gptr() now points to 'c' in "zbcdef"
        std::cout << c1 << c2 << c3 << c4 << '\n';

        s.rdbuf()->sputbackc('b'); // gptr() now points to 'b' in "zbcdef"
        s.rdbuf()->sputbackc('z'); // gptr() now points to 'z' in "zbcdef"
        // gptr() == eback() now: no putback position remains, so the next
        // sputbackc() calls pbackfail(), which fails here
        if (s.rdbuf()->sputbackc('x') == std::char_traits<char>::eof())
            std::cout << "No room to putback after 'z'\n";
    }

Output:

    Before putback, string holds abcdef
    After putback, string holds zbcdef
    azzb
    No room to putback after 'z'
https://en.cppreference.com/w/cpp/io/basic_streambuf/sputbackc
CC-MAIN-2018-34
en
refinedweb
From that webpage, someone has just used Cython to implement them -- they don't come with Cython. You can install that package in the same way as most packages on PyPI:

1. Download bintrees-0.3.0.tar.gz from PyPI.
2. Extract the tarball.
3. Run sage -python setup.py install

You'll be able to run import bintrees from within your Sage sessions now. As for using the package, you'll have to look at that package's specific documentation.

The Cython documentation is not built / included in Sage, but you can find it at. For other packages, it varies. For example, you can find the Pari documentation under $SAGE_ROOT/local/share/pari/doc/. None of this package-specific documentation is made available from within the notebook. When you type cython? from the notebook, you're getting the documentation for a specific function called cython within Sage -- this is different from the documentation for Cython itself.

I'm not sure what you mean about "optional included sub-packages".
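Once it's installed, using it is just the package's own dict-style API. A quick sketch (the class and method names here follow the bintrees docs and may differ between versions, so check that package's documentation):

    # run with `sage -python`, or inside a Sage session
    import bintrees

    t = bintrees.AVLTree()  # a balanced binary search tree with a dict-like API
    t[3] = 'c'              # dict-style insertion
    t[1] = 'a'
    t[2] = 'b'
    print(2 in t)           # membership test -> True
    print(list(t.keys()))   # keys come back in sorted order: [1, 2, 3]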
https://ask.sagemath.org/answers/11585/revisions/
CC-MAIN-2018-34
en
refinedweb
C++ File Input/Output

C-Style Conversion Routines

C++ uses the << operator for formatted output and the >> operator for formatted input. C has its own set of output functions (the std::printf family) and input conversion functions (the std::scanf functions). This section goes into the details of these C-style conversion routines.

The std::printf Family of Output Functions

C uses the std::printf function call and related functions for output. A std::printf call consists of two parts: a format that describes how to print the data and a list of data to print.

    std::printf(format, parameter-1, parameter-2, ...);

The format string is printed exactly. For example:

    std::printf("Hello World\n");

prints:

    Hello World

To print a number, you must put a % conversion in the format string. For example, when C sees %d in the format string, it takes the next parameter from the parameter list (which must be an integer) and prints it. Figure 16-1 shows how the elements of the std::printf statement work to generate the final result.

The conversion %d is used for integers. Other types of parameters use different conversions. For example, if you want to print a floating-point number, you need a %f conversion. Table 16-9 lists the conversions. Many additional conversions also can be used in the std::printf statement. See your reference manual for details.

The std::printf function does not check for the correct number of parameters on each line. If you supply too many, the extra parameters are ignored. If you supply too few, C will make up values for the missing parameters. Also, C does not type check parameters, so if you use a %d on a floating-point number, you will get strange results.

Why does 2 + 2 = 5986? (Your results may vary.)

Example 16-7: two/two.c

    #include <cstdio>

    int main( )
    {
        int answer;
        answer = 2 + 2;
        std::printf("The answer is %d\n");
        return (0);
    }

Why does 21 / 7 = 0? (Your results may vary.)

Example 16-8: float3/float3.c

    #include <cstdio>

    int main( )
    {
        float result;
        result = 21.0 / 7.0;
        std::printf("The result is %d\n", result);
        return (0);
    }

The function std::fprintf is similar to std::printf except that it takes one additional argument, the file to print to:

    std::fprintf(file, format, parameter-1, parameter-2, ...);

Another flavor of the std::printf family is the std::sprintf call. The first parameter of std::sprintf is a C-style string. The function formats the output and stores the result in the given string:

    std::sprintf(string, format, parameter-1, parameter-2, ...);

For example:

    char file_name[40];   /* The filename */
    /* Current file number for this segment */
    int file_number = 0;

    std::sprintf(file_name, "file.%d", file_number);
    ++file_number;
    out_file = std::fopen(file_name, "w");

WARNING: The return value of std::sprintf differs from system to system. The ANSI standard defines it as the number of characters stored in the string; however, some implementations of Unix C define it to be a pointer to the string.

The std::scanf Family of Input Functions

Reading is accomplished through the std::scanf family of calls. The std::scanf function is similar to std::printf in that it has sister functions: std::fscanf and std::sscanf. The std::scanf function reads the standard input (stdin in C terms, cin in C++ terms), parses the input, and stores the results in the parameters in the parameter list.

The format for a scanf function call is:

    number = scanf(format, &parameter1, . . .);

- number - Number of parameters successfully converted.
- format - Describes the data to be read.
- parameter1 - First parameter to be read. Note the & in front of the parameter. These parameters must be passed by address.

WARNING: If you forget to put & in front of each variable for std::scanf, the result can be a "Segmentation violation core dumped" or "Illegal memory access" error. In some cases a random variable or instruction will be modified. This is not common on Unix machines, but MS-DOS/Windows, with its lack of memory protection, cannot easily detect this problem. In MS-DOS/Windows, omitting & can cause a system crash.

There is one problem with std::scanf: it's next to impossible to get the end-of-line handling right. However, there's a simple way to get around the limitations of std::scanf -- don't use it. Instead, use std::fgets followed by the string version of std::scanf, the function std::sscanf:

    char line[100]; // Line for data

    std::fgets(line, sizeof(line), stdin);
    // Read numbers
    std::sscanf(line, "%d %d", &number1, &number2);

Finally, there is a file version of std::scanf, the function std::fscanf. It's identical to scanf except the first parameter is the file to be read. Again, this function is extremely difficult to use correctly and should not be used. Use std::fgets and std::sscanf instead.
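The text above describes std::fprintf but gives no complete example, so here is a minimal sketch in the same style (the file name "results.txt" is made up for illustration):

    #include <cstdio>

    int main( )
    {
        std::FILE *out_file = std::fopen("results.txt", "w");
        if (out_file == NULL)
            return (1); /* could not open the file */

        /* same conversions as std::printf, but output goes to the file */
        std::fprintf(out_file, "The answer is %d\n", 2 + 2);
        std::fclose(out_file);
        return (0);
    }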
https://www.developer.com/net/cplus/article.php/10919_2119781_7/C-File-InputOutput.htm
CC-MAIN-2018-34
en
refinedweb
Parent Component Manager

When using Apache Cocoon it is sometimes necessary to obtain components from other sources than the user.roles file, or preferable to have a common component manager for several web applications.

The pattern chosen for Cocoon is the dynamic loading of a component manager class. The initialization parameter parent-component-manager in web.xml specifies a class that will be loaded, instantiated and used as a parent component manager for Cocoon's component manager.

The recommended procedure is for the class, when it is initialized, to create a delegate in the form of an ExcaliburComponentManager, configure it by looking up a Configuration object via JNDI, and delegate any requests to it.

In order to provide a way to pass parameters to the parent component manager class (the class specified in parent-component-manager), Cocoon will instantiate the class via the constructor that takes a single String argument, passing anything to the right of the first '/' in the parameter value to the constructor.

Subsequently Cocoon examines whether the class implements org.apache.avalon.framework.logger.LogEnabled and/or org.apache.avalon.framework.activity.Initializable and calls setLogger and/or initialize, as appropriate. The instance is then used as a parent component manager.

Since that didn't make much sense in itself, let's look at the sample. The goal is to define a component that can give us the time of day and let it be managed by a parent component manager. So, first we need to put a Configuration object into JNDI, and then grab that object, use it to configure an ExcaliburComponentManager, and pass on any requests to that manager.

Step 1: Creating a configuration object

We'll do this the quick and dirty way. The static initializer of a class will create a Configuration instance with a single role and bind it to org/apache/cocoon/samples/parentcm/ParentCMConfiguration.

The following code was taken from org/apache/cocoon/samples/parentcm/Configurator.java

    public class Configurator {
        static {
            try {
                //
                // Create a new role.
                //
                DefaultConfiguration config = new DefaultConfiguration("roles", "");
                DefaultConfiguration timeComponent = new DefaultConfiguration("role", "roles");
                timeComponent.addAttribute("name", Time.ROLE);
                timeComponent.addAttribute("default-class", TimeComponent.class.getName());
                timeComponent.addAttribute("shorthand", "samples-parentcm-time");
                config.addChild(timeComponent);

                //
                // Bind it - get an initial context.
                //
                Hashtable environment = new Hashtable();
                environment.put(Context.INITIAL_CONTEXT_FACTORY,
                                MemoryInitialContextFactory.class.getName());
                initialContext = new InitialContext(environment);

                //
                // Create subcontexts and bind the configuration.
                //
                Context ctx = initialContext.createSubcontext("org");
                ctx = ctx.createSubcontext("apache");
                ctx = ctx.createSubcontext("cocoon");
                ctx = ctx.createSubcontext("samples");
                ctx = ctx.createSubcontext("parentcm");
                ctx.rebind("ParentCMConfiguration", config);
            } catch (Exception e) {
                e.printStackTrace(System.err);
            }
        }
    }

To make sure the static initializer runs we make Cocoon force-load the class by making a change to the web.xml file:

    <init-param>
        <param-name>load-class</param-name>
        <param-value>
            <!-- For IBM WebSphere:
            com.ibm.servlet.classloader.Handler -->

            <!-- For Database Driver: -->
            @database-driver@

            <!-- For parent ComponentManager sample:
            This will cause the static initializer to run,
            and thus the Configuration object to be created
            and bound.
            -->
            org.apache.cocoon.samples.parentcm.Configurator
        </param-value>
    </init-param>

Step 2: Write the component manager

Now that the configuration object is sitting there waiting for us, let's craft the component manager. Please see the file org/apache/cocoon/samples/parentcm/ParentComponentManager.java for an example. It is too much to paste in here, but a simplified sketch of the idea appears at the end of this page.

Step 3: Tell Cocoon to use the component manager

Change the web.xml file to:

    <init-param>
        <param-name>parent-component-manager</param-name>
        <param-value>org.apache.cocoon.samples.parentcm.ParentComponentManager/(remove this line break)
            org/apache/cocoon/samples/parentcm/ParentCMConfiguration</param-value>
    </init-param>

Cocoon will now do the following: First, it will split the parameter value at the first slash, in this case ending up with the strings "org.apache.cocoon.samples.parentcm.ParentComponentManager" and "org/apache/cocoon/samples/parentcm/ParentCMConfiguration". The first string is the class to instantiate. The second is the parameter that will be passed to the constructor.

Next, Cocoon loads the component manager class and uses reflection to find a constructor that will accept a single String argument. Upon finding one, it instantiates the class in a manner similar to:

    ComponentManager cm = new org.apache.cocoon.samples.parentcm.ParentComponentManager(
        "org/apache/cocoon/samples/parentcm/ParentCMConfiguration");

After this Cocoon checks whether the parent component manager class implements Initializable and/or LogEnabled. Since the ParentComponentManager class implements both, Cocoon does the following (with simplification):

    ((LogEnabled) cm).enableLogging(logger);
    ((Initializable) cm).initialize();

Finally, the instance is used as parent component manager of Cocoon's own component manager.

Step 4: Use the component

Cocoon components can now use the ComponentManager given to them by Cocoon to look up the component managed by the parent component manager:

The following code was taken from org/apache/cocoon/samples/parentcm/Generator.java

    public void setup(SourceResolver resolver, Map objectModel, String src, Parameters par)
        throws ProcessingException, SAXException, IOException {
        Time timeGiver = null;
        try {
            timeGiver = (Time) manager.lookup(Time.ROLE);
            this.time = timeGiver.getTime();
        } catch (ComponentException ce) {
            throw new ProcessingException("Could not obtain current time.", ce);
        } finally {
            manager.release(timeGiver);
        }
    }

And that concludes the tour. A parent component manager was initialized with a configuration obtained via JNDI and its components used by a Cocoon generator.
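As promised in Step 2, here is a minimal, simplified sketch of what such a parent component manager can look like. This is not the shipped sample: the JNDI environment handling (e.g. the MemoryInitialContextFactory used in Step 1) and error handling are omitted, and the Excalibur lifecycle calls follow the Avalon 4 interfaces and may differ in detail between versions, so treat the real ParentComponentManager.java as the authoritative reference.

    // Sketch only -- see org/apache/cocoon/samples/parentcm/ParentComponentManager.java
    // for the real implementation.
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import org.apache.avalon.excalibur.component.ExcaliburComponentManager;
    import org.apache.avalon.framework.activity.Initializable;
    import org.apache.avalon.framework.component.Component;
    import org.apache.avalon.framework.component.ComponentException;
    import org.apache.avalon.framework.component.ComponentManager;
    import org.apache.avalon.framework.configuration.Configuration;
    import org.apache.avalon.framework.logger.LogEnabled;
    import org.apache.avalon.framework.logger.Logger;

    public class SketchParentComponentManager
            implements ComponentManager, LogEnabled, Initializable {

        private final String jndiName; // everything right of the first '/' in web.xml
        private final ExcaliburComponentManager delegate = new ExcaliburComponentManager();
        private Logger logger;

        public SketchParentComponentManager(String jndiName) {
            this.jndiName = jndiName;
        }

        public void enableLogging(Logger logger) {
            this.logger = logger;
        }

        public void initialize() throws Exception {
            // Fetch the Configuration that Step 1 bound into JNDI and use it
            // to set up the Excalibur delegate.
            Context ctx = new InitialContext(new Hashtable());
            Configuration config = (Configuration) ctx.lookup(jndiName);
            delegate.enableLogging(logger);
            delegate.configure(config);
            delegate.initialize();
        }

        // The ComponentManager contract: delegate every request.
        public Component lookup(String role) throws ComponentException {
            return delegate.lookup(role);
        }

        public boolean hasComponent(String role) {
            return delegate.hasComponent(role);
        }

        public void release(Component component) {
            delegate.release(component);
        }
    }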
http://cocoon.apache.org/2.1/developing/parent-component-manager.html
CC-MAIN-2016-36
en
refinedweb
public interface Content

DesignTime@Runtime content abstraction for modification purposes. Used to modify the underlying components in a running system. It restricts the changes to those designed to be modified.

An ADF application is represented by objects at all the MVC layers. This family of interfaces allows the client code to interact on a simple abstraction and have the implementation do the right things with all the cooperating layers and artifacts in the application. It's expected that this interface can be implemented by various view layer technologies, most especially ADF Swing and ADF Faces.

Content.ComponentRef findComponent(java.lang.String uniqueId)
    uniqueId - each component is known by a unique string representation
    See also: insertFragment

ContentUsage createUsage(java.lang.String defName)
    defName - contains the fully qualified def name of the usage to add

ContentUsage createUsage(java.lang.String defName, java.lang.String elementName, java.lang.String namespaceURL)
    defName - the fully qualified def name of the usage to add
    elementName - the view document element name
    namespaceURL - the view document namespace URL

void insertChild(Content.ComponentRef parent, ContentUsage newChild)
    parent - the component within the content to insert into, which was retrieved by a call to findComponent
    newChild - the usage to add to the view
    Note: any content-specific requirements will be met; e.g. all changes are registered with the faces change API.
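Putting the documented signatures together, a hypothetical usage sketch looks like the following. The import paths are inferred from this page's location, the unique id and def name are made up for illustration, and how a Content instance is obtained is framework-specific, so it is simply passed in here:

    import oracle.adf.dtrt.view.Content;
    import oracle.adf.dtrt.view.ContentUsage;

    public class ContentSketch {
        public void addUsage(Content content) throws Exception {
            // Find a designed-to-be-modified component by its unique id
            // ("panel1" is a made-up id for illustration).
            Content.ComponentRef parent = content.findComponent("panel1");

            // Create a usage from a fully qualified def name (also made up).
            ContentUsage usage = content.createUsage("/WEB-INF/fragments/myFragment.jsff");

            // Insert the new usage under the located component; per the note
            // above, content-specific requirements such as registering the
            // change with the faces change API are handled by the implementation.
            content.insertChild(parent, usage);
        }
    }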
http://docs.oracle.com/cd/E35521_01/apirefs.111230/e18581/oracle/adf/dtrt/view/Content.html
CC-MAIN-2016-36
en
refinedweb
C# Programming/Print version

Introduction

C# (pronounced "See Sharp") is a multi-purpose computer programming language suitable for all development needs.

To compile your first C# application, you will need a copy of a .NET Framework SDK installed on your PC. There are two .NET frameworks available: Microsoft's and Mono's.

Microsoft .NET

Mono

For Windows, Linux, or other Operating Systems, an installer can be downloaded from the Mono website.

Linux

For Linux, one good compiler option is cscc, which can be downloaded for free from the DotGNU Portable.Net project page. The compiled programs can then be run with ilrun.

Language Basics

Reasoning

Namespace

Namespaces are named using Pascal Case (also called UpperCamelCase) with no underscores. This means the first letter of every word in the name is capitalized. For example: MyNewNamespace. Also, note that Pascal Case also denotes that acronyms of three or more letters should only have the first letter capitalized (MyXmlNamespace instead of MyXMLNamespace).

Assemblies

If an assembly contains only one namespace, it should use the same name. Otherwise, assemblies should follow the normal Pascal Case format.

Classes and Structures

Exception Classes

Follow class naming conventions, but add Exception to the end of the name. In .NET 2.0, all exception classes should inherit from the System.Exception base class, and not inherit from System.ApplicationException.

Interfaces

Follow class naming conventions, but start the name with I and capitalize the letter following the I. Example: IFoo. The I prefix helps to differentiate between interfaces and classes and also to avoid name collisions.

Functions

Pascal Case, no underscores except in the event handlers. Try to avoid abbreviations. Many programmers have a nasty habit of overly abbreviating everything. This should be discouraged.

Properties and Public Member Variables

Pascal Case, no underscores. Try to avoid abbreviations.

Parameters and Procedure-level Variables

Camel Case (or lowerCamelCase). Try to avoid abbreviations. Camel Case is the same as Pascal Case, but the first letter of the first word is lowercased.

Class-level Private and Protected Variables

Camel Case with a leading underscore. Always indicate protected or private in the declaration. The leading underscore is the only controversial thing in this document. The leading character helps to prevent name collisions in constructors (a parameter and a private variable having the same name).

Controls on Forms

Constants

Pascal Case. The use of SCREAMING_CAPS is discouraged. This is a large change from earlier conventions. Most developers now realize that in using SCREAMING_CAPS they betray more implementation than is necessary. A large portion of the .NET Framework Design Guidelines is dedicated to this discussion.

Example

Fields, local variables, and parameters

C# supports several program elements corresponding to the general programming concept of variable: fields, parameters, and local variables.

Fields

Local variables

Parameter

Types

Value types

All value types are derived implicitly from System.ValueType.

Reference types

Reference types are managed very differently by the CLR. All reference types consist of two parts: the reference itself and the object data, which is stored on the managed heap. Bugs caused by unintended boxing can be very difficult to spot. Avoid boxing, if possible.

    object getInteger = "97"; // this object holds a string, not a boxed int
    int anInteger = (int) getInteger; // No compile-time error. The program will crash, however.
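For contrast, unboxing succeeds when the cast names the type that was actually boxed; a string such as "97" must instead be converted, for example with int.Parse:

    object boxed = 97;          // boxing: the int value is wrapped in an object
    int unboxed = (int) boxed;  // unboxing: the cast matches the boxed type, so this works

    object getInteger = "97";   // this object references a string, not a boxed int
    int anInteger = int.Parse((string) getInteger); // convert the string instead of unboxing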
The built-in C# type aliases and their equivalent .NET Framework types follow:

Integers

Floating-point

Other predefined types

Custom types

The predefined types can be aggregated and extended into custom types. Custom value types are declared with the struct or enum keyword. Likewise, custom reference types are declared with the class keyword.

Arrays

Conversion

Values of a given type may or may not be explicitly or implicitly convertible to other types depending on predefined conversion rules, inheritance structure, and explicit cast definitions.

Predefined conversions

Many predefined value types have predefined conversions to other predefined value types. If the type conversion is guaranteed not to lose information, the conversion can be implicit (i.e. an explicit cast is not required).

Inheritance polymorphism

Overflow exception control

Others

Conditional statements

A conditional statement decides whether to execute code based on conditions. The if statement and the switch statement are the two types of conditional statements in C#.

if statement

As with most of C#, the if statement has the same syntax as in C, C++, and Java. Thus, it is written in the following form:

    if (condition)
    {
        // Do something
    }
    else
    {
        // Do something else
    }

Iteration statements

An iteration statement creates a loop of code to execute a variable number of times. The for loop, the do loop, the while loop, and the foreach loop are the iteration statements in C#.

do ... while loop

Jump statements

A jump statement can be used to transfer program control using keywords such as break, continue, return, yield, and throw.

break

continue

return

The return keyword identifies the return value for the function or method (if any), and transfers control to the end of the function.

    namespace JumpSample
    {
        public class Entry
        {
            static int Fun()
            {
                int a = 3;
                return a; // the code terminates here from this function
                a = 9;    // here is a block that will not be executed
            }

            static void Main(string[] args)
            {
                int OnNumber = Fun(); // the value of OnNumber is 3, not 9...
            }
        }
    }

yield

    // needs "using System.Collections;" for the IEnumerable return type
    static IEnumerable MyCounter(int stop, int step)
    {
        int i;
        for (i = 0; i < stop; i += step)
        {
            yield return i;
        }
    }

    static void Main()
    {
        foreach (int j in MyCounter(10, 2))
        {
            Console.WriteLine("{0} ", j);
        }
        // Will display 0 2 4 6 8
    }

throw

The following is also bad practice:

    try
    {
        ..
    }
    catch (Exception ex)
    {
        throw ex;
    }

The CLR will now think the exception originated inside the catch block, so the original stack trace is lost; to rethrow an exception and preserve the stack trace, use the bare statement throw; instead.

References

Classes

Namespaces are used to provide a "named space" in which your application resides. They're used especially to provide the C# compiler a context for all the named information in your program, such as variable names. Without namespaces, for example, you wouldn't be able to make a class named Console, as .NET already uses one in its System namespace. A simple class definition looks like this:

    class Room
    {
        int length;
        int width;
        int height;
        string name;

        public Room(int l, int w, int h)
        {
            length = l;
            width = w;
            height = h;
        }
    }

    class Home
    {
        int numberOfRooms;
        int plotSize;
        string locality;
    }

Encapsulation is depriving the user of a class of information he does not need, and preventing him from manipulating information he does not need to manipulate.

An INTERFACE in C# is a type definition similar to a class, except that it purely represents a contract between an object and its user. It can neither be directly instantiated as an object, nor can data members be defined. So, an interface is nothing but a collection of method and property declarations.
The following defines a simple interface:

interface IShape
{
    double X { get; set; }
    double Y { get; set; }
    void Draw();
}

A CONVENTION used in the .NET Framework (and likewise by many C# programmers) is to place an "I" at the beginning of an interface name to distinguish it from a class name. Another common interface naming convention is used when an interface declares only one key method, such as Draw() in the above example. The interface name is then formed as an adjective by adding the "...able" suffix. So, the interface name above could also be IDrawable. This convention is used throughout the .NET Framework.

Implementing an interface is simply done by inheriting from it and defining all the methods and properties declared by the interface. For instance,

class Square : IShape
{
    private double _mX, _mY;

    public void Draw() { ... }

    public double X
    {
        set { _mX = value; }
        get { return _mX; }
    }

    public double Y
    {
        set { _mY = value; }
        get { return _mY; }
    }
}

Although a class can inherit from one class only, it can inherit from any number of interfaces. This is a simplified form of multiple inheritance supported by C#. When inheriting from a class and one or more interfaces, the base class should be provided first in the inheritance list, followed by any interfaces to be implemented. For example:

class MyClass : Class1, Interface1, Interface2 { ... }

Object references can be declared using an interface type. For instance, using the previous examples,

class MyClass
{
    static void Main()
    {
        IShape shape = new Square();
        shape.Draw();
    }
}

Interfaces can inherit from any number of other interfaces, but cannot inherit from classes. For example:

interface IRotateable
{
    void Rotate(double theta);
}

interface IDrawable : IRotateable
{
    void Draw();
}

Additional details

Access specifiers (i.e. private, internal, etc.) cannot be provided for interface members, as all members are public by default. A class implementing an interface must define all the members declared by the interface. The implementing class has the option of making an implemented method virtual, if it is expected to be overridden in a child class. There are no static methods within an interface, but any static methods can be implemented in a class that manages objects using it. In addition to methods and properties, interfaces can declare events and indexers as well.

For those familiar with Java, C#'s interfaces are extremely similar to Java's.

A delegate is a way of telling C# which method to call when an event is triggered. For example, if you click a Button on a form, the program would call a specific method. It is this pointer that is a delegate. Delegates are good, as you can notify several methods that an event has occurred, if you wish so.

For example, when the user clicks a button, the following sequence occurs:

- User releases the mouse button
- The .NET framework raises a MouseUp event
- The .NET framework raises a MouseClick event
- The .NET framework raises a Clicked event

An event is a special kind of delegate that facilitates event-driven programming. Events are class members that cannot be called outside of the class regardless of its access specifier. So, for example, an event declared to be public would allow other classes the use of += and -= on the event, but firing the event (i.e. invoking the delegate) is only allowed in the class containing the event.
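A minimal sketch of declaring and firing an event (the Alarm class and its members are invented names for illustration, not .NET Framework types):

using System;

class Alarm
{
    // The event member; outside code may only attach or detach handlers.
    public event EventHandler Ring;

    // Firing the event is only possible inside the declaring class.
    public void Trigger()
    {
        if (Ring != null)                // only fire if a handler is attached
            Ring(this, EventArgs.Empty);
    }
}

class AlarmDemo
{
    static void Main()
    {
        Alarm alarm = new Alarm();
        alarm.Ring += delegate(object sender, EventArgs e)
        {
            Console.WriteLine("The alarm is ringing.");
        };
        alarm.Trigger();
    }
}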
Extension methods are a feature new to C# 3.0 and allow you to extend existing types with your own methods. While they are static, they are used as if they are normal methods of the class being extended. Thus, new functionality can be added to an existing class without a need to change or recompile the class itself. However, since they are not directly part of the class, extensions cannot access private or protected methods, properties, or fields.

Extension methods should be created inside a static class. They themselves should be static and should contain at least one parameter, the first preceded by the this keyword:

public static class MyExtensions
{
    public static string[] ToStringArray<T>(this List<T> list)
    {
        string[] array = new string[list.Count];
        for (int i = 0; i < list.Count; i++)
            array[i] = list[i].ToString();
        return array;
    }

    // to be continued...
}

The type of the first parameter (in this case List<T>) specifies the type with which the extension method will be available. You can now call the extension method like this:

List<int> list = new List<int>();
list.Add(1);
list.Add(2);
list.Add(3);

string[] strArray = list.ToStringArray();
// strArray will now contain "1", "2" and "3".

Here is the rest of the program:

using System;
using System.Collections.Generic;

public static class MyExtensions
{
    ... // continued from above

    public static void WriteToConsole(this string str)
    {
        Console.WriteLine(str);
    }

    public static string Repeat(this string str, int times)
    {
        System.Text.StringBuilder sb = new System.Text.StringBuilder();
        for (int i = 0; i < times; i++)
            sb.Append(str);
        return sb.ToString();
    }
}

class ExtensionMethodsDemo
{
    static void Main()
    {
        List<int> myList = new List<int>();
        for (int i = 1; i <= 10; i++)
            myList.Add(i);

        string[] myStringArray = myList.ToStringArray();

        foreach (string s in myStringArray)
            s.Repeat(4).WriteToConsole(); // string is extended by WriteToConsole()

        Console.ReadKey();
    }
}

Note that extension methods can take parameters simply by defining more than one parameter without the this keyword.

Design Patterns are common building blocks designed to solve everyday software issues. Some basic patterns mirror structures we see in everyday life. Key patterns are the singleton pattern, the factory pattern, and the chain of responsibility pattern.

Factory Pattern

Singleton

The fragment below sketches a singleton: a single shared Hashtable exposed through a read-only property.

Hashtable sharedHt = new Hashtable();

public Hashtable Singleton
{
    get { return sharedHt; } // every caller receives the same shared instance
}

The .NET Framework

Console Programming

Input

Input can be gathered in a similar manner to outputting data, using the Read() and ReadLine() methods of that same System.Console class:

using System;

public class ExampleClass
{
    public static void Main()
    {
        Console.WriteLine("Greetings! What is your name?");
        Console.Write("My name is: ");
        string name = Console.ReadLine();
        Console.WriteLine("Nice to meet you, " + name);
        Console.ReadKey();
    }
}

The above program requests the user's name and displays it back. The final Console.ReadKey() waits for the user to enter a key before exiting the program.

Output

The example program below shows a couple of ways to output text:

using System;

public class HelloWorld
{
    public static void Main()
    {
        Console.WriteLine("Hello World!"); // relies on "using System;"
        Console.Write("This is...");
        Console.Write(" my first program!\n");
        System.Console.WriteLine("Goodbye World!"); // no "using" statement required
    }
}

The above code displays the following text:

Hello World!
This is... my first program!
Goodbye World!

That text is output using the System.Console class. The using statement at the top allows the compiler to find the Console class without specifying the System namespace each time it is used.
The middle lines use the Write() method, which does not automatically create a new line. To specify a new line, we can use the sequence backslash-n (\n). If for whatever reason we wanted to really show the \n character instead, we add a second backslash (\\n). The backslash is known as the escape character in C# because it is not treated as a normal character, but allows us to encode certain special characters (like a new line character).

Error

The Error output is used to divert error-specific messages to the console. To a novice user this may seem fairly pointless, as this achieves the same as Output (as above). If you decide to write an application that runs another application (for example, a scheduler), you may wish to monitor the output of that program - more specifically, you may wish to be notified only of the errors that occur. If you coded your program to write to the Console.Error stream whenever an error occurred, you can tell your scheduler program to monitor this stream and feed back any information that is sent to it. Instead of the console appearing with the error messages, your program may wish to log these to a file. You may wish to revisit this after studying Streams and after learning about the Process class.

Command line arguments

Command line arguments are values that are passed to a console program before execution. For example, the Windows command prompt includes a copy command that takes two command line arguments. The first argument is the original file and the second is the location or name for the new copy. Custom console applications can have arguments as well.

using System;

public class UserName
{
    public static void Main(string[] args)
    {
        Console.WriteLine("First Name: " + args[0]);
        Console.WriteLine("Last Name: " + args[1]);
        Console.Read();
    }
}

If the above code is compiled to a program called username.exe, it can be executed from the command line using two arguments, e.g. "Bill" and "Gates":

C:\>username.exe Bill Gates

Notice how the Main() method above has a string array parameter. The program assumes that there will be two arguments. That assumption makes the program unsafe. If it is run without the expected number of command line arguments, it will crash when it attempts to access the missing argument. To make the program more robust, we can check to see if the user entered all the required arguments.

using System;

public class Test
{
    public static void Main(string[] args)
    {
        if (args.Length >= 1)
            Console.WriteLine(args[0]);
        if (args.Length >= 2)
            Console.WriteLine(args[1]);
    }
}

Try running the program with only your first name or no name at all. The args.Length property returns the total number of arguments. If no arguments are given, it will return zero.

You are also able to group a single argument together by using quote marks (""). This is particularly useful if you are expecting many parameters but there is a requirement for including spaces (e.g. file locations, file names, full names, etc.).

using System;

class Test
{
    public static void Main(string[] args)
    {
        for (int index = 0; index < args.Length; index++)
        {
            Console.WriteLine((index + 1) + ": " + args[index]);
        }
    }
}

C:\> Test.exe Separate words "grouped together"
1: Separate
2: words
3: grouped together

Formatted output

Console.Write() and Console.WriteLine() allow you to output a text string, but also allow writing a string with variable substitution.
These two functions normally have a string as the first parameter. When additional objects are added, either as parameters or as an array, the function will scan the string to substitute objects in place of tokens. For example:

{
    int i = 10;
    Console.WriteLine("i = {0}", i);
}

The {0} token is identified by braces and refers to the index of the parameter that needs to be substituted. You may also find a format specifier within the braces; it is preceded by a colon, as in {0:G}.

Rounding number example

This is a small example that rounds a number to a string. It is an augmentation for the Math class of C#. The result of the Round method has to be returned as a string, because significant figures may be trailing zeros that would disappear if a numeric format were used. Here is the code and its call. You are invited to write a shorter version that gives the same result, or to correct errors!

The constant class contains repeating constants that should exist only once in the code, so as to avoid inadvertent changes. (If a constant is changed inadvertently, the change is most likely to be noticed, as the constant is used at several locations.)

using System;

namespace ConsoleApplicationCommons
{
    class Common
    {
        /// <summary>Constant of comma or decimal point in German</summary>
        public const char COMMA = ',';

        /// <summary>Dash or minus constant</summary>
        public const char DASH = '-';

        /// <summary>
        /// The exponent sign in a scientific number, or the capital letter E
        /// </summary>
        public const char EXPONENT = 'E';

        /// <summary>The full stop or period</summary>
        public const char PERIOD = '.';

        /// <summary>The zero string constant used at several places</summary>
        public const String ZERO = "0";
    } // class Common
}

The Maths class is an enhancement to the <math.h> library and contains the rounding calculations.

using System;
using System.Globalization;
using System.IO;
using System.Text;

namespace ConsoleApplicationCommons
{
    /// <summary>
    /// Class for special mathematical calculations.
    /// ATTENTION: Should not depend on any other class except Java libraries!
    /// </summary>
    public class Maths
    {
        public static CultureInfo invC = CultureInfo.InvariantCulture;

        /// <summary>
        /// Value after which the language switches from scientific to double
        /// </summary>
        private const double E_TO_DOUBLE = 1E-4;

        /// <summary>
        /// Maximal digits after which Convert.ToString(…) becomes inaccurate.
        /// </summary>
        private const short MAX_CHARACTERS = 16;

        /// <summary>The string of zeros</summary>
        private static String strZeros = new String('0', MAX_CHARACTERS);

        /// <summary>
        /// Checks whether a character can be a decimal separator.
        /// </summary>
        /// <param name="c">Character to be checked</param>
        /// <returns>
        /// true, if it can be a decimal separator in a language, and false
        /// otherwise.
        /// </returns>
        private static bool IsDecimalSeparator(char c)
        {
            return ((c == Common.COMMA) || (c == Common.PERIOD));
        }

        /// <summary>
        /// Determines how many zeros are to be appended after the decimal
        /// digits.
        /// </summary>
        /// <param name="separator">
        /// Language-specific decimal separator
        /// </param>
        /// <param name="d">Rounded number</param>
        /// <param name="significantsAfter">
        /// Significant digits after decimal
        /// </param>
        /// <returns>Requested value</returns>
        private static short CalculateMissingSignificantZeros(char separator,
            double d, short significantsAfter)
        {
            short after = FindSignificantsAfterDecimal(separator, d);
            short zeros = (short)(significantsAfter - ((after == 0) ? 1 : after));
            return (short)((zeros >= 0) ? zeros : 0);
        }

        /// <summary>
        /// Finds the decimal position language-independently.
        /// </summary>
        /// <param name="value">
        /// Value to be searched for the decimal separator
        /// </param>
        /// <returns>The position of the decimal separator</returns>
        private static short FindDecimalSeparatorPosition(String value)
        {
            short separatorAt = (short)value.IndexOf(Common.COMMA);
            return (separatorAt > -1)
                ? separatorAt
                : (short)value.IndexOf(Common.PERIOD);
        }

        /// <summary>
        /// Calculates the number of significant digits (without the sign and
        /// the decimal separator).
        /// </summary>
        /// <param name="separator">
        /// Language-specific decimal separator
        /// </param>
        /// <param name="d">Value where the digits are to be counted</param>
        /// <param name="significantsAfter">
        /// Number of decimal places after the separator
        /// </param>
        /// <returns>Number of significant digits</returns>
        private static short FindSignificantDigits(char separator, double d,
            short significantsAfter)
        {
            if (d == 0)
                return 0;
            else
            {
                String mantissa = FindMantissa(separator, Convert.ToString(d, invC));
                if (d == (long)d)
                {
                    mantissa = mantissa.Substring(0, mantissa.Length - 1);
                }
                mantissa = RetrieveDigits(mantissa);

                // Find the position of the first non-zero digit:
                short nonZeroAt = 0;
                for (; (nonZeroAt < mantissa.Length)
                       && (mantissa[nonZeroAt] == '0'); nonZeroAt++) ;
                return (short)mantissa.Substring(nonZeroAt).Length;
            }
        }

        /// <summary>
        /// Finds the significant digits after the decimal separator of a
        /// mantissa.
        /// </summary>
        /// <param name="separator">Language-specific decimal separator</param>
        /// <param name="d">Value to be scrutinised</param>
        /// <returns>Number of insignificant zeros after decimal separator.
        /// </returns>
        private static short FindSignificantsAfterDecimal(char separator,
            double d)
        {
            if (d == 0)
                return 1;
            else
            {
                String value = ConvertToString(d);
                short separatorAt = FindDecimalSeparatorPosition(value);
                if (separatorAt > -1)
                    value = value.Substring(separatorAt + 1);
                short eAt = (short)value.IndexOf(Common.EXPONENT);
                if ((separatorAt == -1) && (eAt == -1))
                    return 0;
                else if (eAt > 0)
                    value = value.Substring(0, eAt);
                long longValue = Convert.ToInt64(value, invC);
                if (longValue == 0)
                    return 0;
                else if (Math.Abs(d) < 1)
                {
                    value = Convert.ToString(longValue, invC);
                    if (value.Length >= 15)
                    {
                        return (byte)Convert.ToString(longValue, invC).Length;
                    }
                    else
                        return (byte)(value.Length);
                }
                else
                {
                    if (value.Length >= 15)
                        return (byte)(value.Length - 1);
                    else
                        return (byte)(value.Length);
                }
            }
        }

        /// <summary>
        /// Determines the number of significant digits after the decimal
        /// separator knowing the total number of significant digits and
        /// the number before the decimal separator.
        /// </summary>
        /// <param name="significantsBefore">
        /// Number of significant digits before separator
        /// </param>
        /// <param name="significantDigits">
        /// Number of all significant digits
        /// </param>
        /// <returns>
        /// Number of significant decimals after the separator
        /// </returns>
        private static short FindSignificantsAfterDecimal(
            short significantsBefore, short significantDigits)
        {
            short significantsAfter = (short)(significantDigits - significantsBefore);
            return (short)((significantsAfter > 0) ? significantsAfter : 0);
        }

        /// <summary>
        /// Determines the number of digits before the decimal point.
        /// </summary>
        /// <param name="separator">
        /// Language-specific decimal separator
        /// </param>
        /// <param name="value">Value to be scrutinised</param>
        /// <returns>
        /// Number of digits before the decimal separator
        /// </returns>
        private static short FindSignificantsBeforeDecimal(char separator,
            double d)
        {
            String value = Convert.ToString(d, invC);

            // Return immediately, if result is clear: Special handling at
            // crossroads of floating point and exponential numbers:
            if ((d == 0) || (Math.Abs(d) >= E_TO_DOUBLE) && (Math.Abs(d) < 1))
            {
                return 0;
            }
            else if ((Math.Abs(d) > 0) && (Math.Abs(d) < E_TO_DOUBLE))
                return 1;
            else
            {
                short significants = 0;
                for (short s = 0; s < value.Length; s++)
                {
                    if (IsDecimalSeparator(value[s]))
                        break;
                    else if (value[s] != Common.DASH)
                        significants++;
                }
                return significants;
            }
        }

        /// <summary>
        /// Returns the exponent part of the double number.
        /// </summary>
        /// <param name="d">Value of which the exponent is of interest</param>
        /// <returns>Exponent of the number or zero.</returns>
        private static short FindExponent(double d)
        {
            return short.Parse(FindExponent(Convert.ToString(d, invC)), invC);
        }

        /// <summary>
        /// Finds the exponent of a number.
        /// </summary>
        /// <param name="value">
        /// Value where an exponent is to be searched
        /// </param>
        /// <returns>Exponent, if it exists, or "0".</returns>
        private static String FindExponent(String value)
        {
            short eAt = (short)(value.IndexOf(Common.EXPONENT));
            if (eAt < 0)
                return Common.ZERO;
            else
            {
                return Convert.ToString(short.Parse(value.Substring(eAt + 1)), invC);
            }
        }

        /// <summary>
        /// Finds the mantissa of a number.
        /// </summary>
        /// <param name="separator">
        /// Language-specific decimal separator
        /// </param>
        /// <param name="value">Value where the mantissa is to be found</param>
        /// <returns>Mantissa of the number</returns>
        private static String FindMantissa(char separator, String value)
        {
            short eAt = (short)(value.IndexOf(Common.EXPONENT));
            if (eAt > -1)
                value = value.Substring(0, eAt);
            if (FindDecimalSeparatorPosition(value) == -1)
                value += ".0";
            return value;
        }

        /// <summary>
        /// Retrieves the digits of the value only
        /// </summary>
        /// <param name="d">Number</param>
        /// <returns>The digits only</returns>
        private static String RetrieveDigits(double d)
        {
            double dValue = d;
            short exponent = FindExponent(d);
            StringBuilder value = new StringBuilder();
            if (exponent == 0)
            {
                value.Append(dValue);
                if (value.Length >= MAX_CHARACTERS)
                {
                    value.Clear();
                    if (Math.Abs(dValue) < 1)
                        value.Append("0");

                    // Determine the exponent for a scientific form:
                    exponent = 0;
                    while (((long)dValue != dValue) && (dValue < 1E11))
                    {
                        dValue *= 10;
                        exponent++;
                    }
                    value.Append((long)dValue);
                    while ((long)dValue != dValue)
                    {
                        dValue -= (long)dValue;
                        dValue *= 10;
                        value.Append((long)dValue);
                    }
                }
            }
            else
            {
                double multiplier = Math.Pow(10, -exponent);
                for (short s = 0; (s <= 16) && (exponent != 0); s++)
                {
                    dValue *= multiplier;
                    value.Append((long)dValue);
                    dValue -= (long)dValue;
                    exponent++;
                    multiplier = 10;
                }
            }
            if (value.Length >= MAX_CHARACTERS + 2)
                value.Length = MAX_CHARACTERS + 2;
            return RetrieveDigits(value.ToString());
        }

        /// <summary>
        /// Retrieves the digits of the value only
        /// </summary>
        /// <param name="number">Value to be scrutinised</param>
        /// <returns>The digits only</returns>
        private static String RetrieveDigits(String number)
        {
            // Strip off exponent part, if it exists:
            short eAt = (short)number.IndexOf(Common.EXPONENT);
            if (eAt > -1)
                number = number.Substring(0, eAt);
            return number.Replace(Convert.ToString(Common.DASH), "").Replace(
                Convert.ToString(Common.COMMA), "").Replace(
                Convert.ToString(Common.PERIOD), "");
        }

        /// <summary>
        /// Inserts the decimal separator at the right place
        /// </summary>
        /// <param name="dValue">Number</param>
        /// <param name="value">
        /// String variable, where the separator is to be added.
        /// </param>
        private static void InsertSeparator(double dValue, StringBuilder value)
        {
            short separatorAt = (short)Convert.ToString((long)dValue).Length;
            if (separatorAt < value.Length)
                value.Insert(separatorAt, Common.PERIOD);
        }

        /// <summary>
        /// Calculates the power of the base to the exponent without changing
        /// the least-significant digits of a number.
        /// </summary>
        /// <param name="basis"></param>
        /// <param name="exponent">basis to power of exponent</param>
        /// <returns></returns>
        public static double Power(int basis, short exponent)
        {
            return Power((short)basis, exponent);
        }

        /// <summary>
        /// Calculates the power of the base to the exponent without changing
        /// the least-significant digits of a number.
        /// </summary>
        /// <param name="basis"></param>
        /// <param name="exponent"></param>
        /// <returns>basis to power of exponent</returns>
        public static double Power(short basis, short exponent)
        {
            if (basis == 0)
                return (exponent == 0) ? 1 : 0;
            else
            {
                if (exponent == 0)
                    return 1;
                else
                {
                    // The Math method Pow does change the least significant
                    // digits after the decimal separator and is therefore
                    // useless.
                    long result = 1;
                    short s = 0;
                    if (exponent > 0)
                    {
                        for (; s < exponent; s++)
                            result *= basis;
                    }
                    else if (exponent < 0)
                    {
                        for (s = exponent; s < 0; s++)
                            result /= basis;
                    }
                    return result;
                }
            }
        }

        /// <summary>
        /// Rounds a number to the decimal places.
        /// </summary>
        /// <param name="d">Number to be rounded</param>
        /// <param name="separator">
        /// Language-specific decimal separator
        /// </param>
        /// <param name="significantsAfter">
        /// Number of decimal places after the separator
        /// </param>
        /// <returns>Rounded number to the requested decimal places</returns>
        public static double Round(char separator, double d,
            short significantsAfter)
        {
            if (d == 0)
                return 0;
            else
            {
                double constant = Power(10, significantsAfter);
                short dsExponent = FindExponent(d);
                short exponent = dsExponent;
                double value = d * constant * Math.Pow(10, -exponent);
                String exponentSign = (exponent < 0)
                    ? Convert.ToString(Common.DASH) : "";
                if (exponent != 0)
                {
                    exponent = (short)Math.Abs(exponent);
                    value = Round(value);
                }
                else
                {
                    while (FindSignificantsBeforeDecimal(separator, value)
                           < significantsAfter)
                    {
                        constant *= 10;
                        value *= 10;
                    }
                    value = Round(value) / constant;
                }

                // Power method cannot be used, as the exponentiated number may
                // exceed the maximal long value.
                exponent -= (short)(Math.Sign(dsExponent) *
                    (FindSignificantDigits(separator, value, significantsAfter) - 1));
                if (dsExponent != 0)
                {
                    String strValue = Convert.ToString(value, invC);
                    short separatorAt = FindDecimalSeparatorPosition(strValue);
                    if (separatorAt > -1)
                    {
                        strValue = strValue.Substring(0, separatorAt);
                    }
                    strValue += Common.EXPONENT + exponentSign +
                        Convert.ToString(exponent);
                    value = double.Parse(strValue, invC);
                }
                return value;
            }
        }

        /// <summary>
        /// Rounds a number according to mathematical rules.
        /// </summary>
        /// <param name="d">Number to be rounded</param>
        /// <returns>Rounded number</returns>
        public static double Round(double d)
        {
            return (long)(d + .5);
        }

        /// <summary>
        /// Converts a double value to a string such that it reflects the double
        /// format (without converting it to a scientific format by itself, as
        /// is the case with Convert.ToString(double, invC)).
/// </summary> /// <param name="d">Value to be converted</param> /// <returns>Same format value as a string</returns> public static String ConvertToString(double d) { double dValue = d; StringBuilder value = new StringBuilder(); if (Math.Sign(dValue) == -1) value.Append(Common.DASH); if ((dValue > 1E-5) && (dValue < 1E-4)) { value.Append("0"); while ((long)dValue == 0) { dValue *= 10; if (dValue >= 1) break; value.Append(Convert.ToString((long)dValue)); } } short exponent = FindExponent(d); if (exponent != 0) { value.Append(RetrieveDigits(dValue)); InsertSeparator(dValue, value); value.Append(Common.EXPONENT); value.Append(exponent); } else { value.Append(RetrieveDigits(dValue)); InsertSeparator(dValue, value); if (value.Length > MAX_CHARACTERS + 3) { value.Length = MAX_CHARACTERS + 3; } } return value.ToString(); } /// <summary> /// Rounds to a fixed number of significant digits. /// </summary> /// <param name="d">Number to be rounded</param> /// <param name="significantDigits"> /// Requested number of significant digits /// </param> /// <param name="separator"> /// Language-specific decimal separator /// </param> /// <returns>Rounded number</returns> public static String RoundToString(char separator, double d, short significantDigits) { // Number of significants that *are* before the decimal separator: short significantsBefore = FindSignificantsBeforeDecimal(separator, d); // Number of decimals that *should* be after the decimal separator: short significantsAfter = FindSignificantsAfterDecimal( significantsBefore, significantDigits); // Round to the specified number of digits after decimal separator: double rounded = Maths.Round(separator, d, significantsAfter); String exponent = FindExponent(Convert.ToString(rounded, invC)); String mantissa = FindMantissa(separator, Convert.ToString(rounded, invC)); double dMantissa = double.Parse(mantissa, invC); StringBuilder result = new StringBuilder(mantissa); // Determine the significant digits in this number: short significants = FindSignificantDigits(separator, dMantissa, significantsAfter); // Add lagging zeros, if necessary: if (significants <= significantDigits) { if (significantsAfter != 0) { result.Append(strZeros.Substring(0, CalculateMissingSignificantZeros(separator, dMantissa, significantsAfter))); } else { // Cut off the decimal separator & after decimal digits: short decimalValue = (short) result.ToString().IndexOf( Convert.ToString(separator)); if (decimalValue > -1) result.Length = decimalValue; } } else if (significantsBefore > significantDigits) { d /= Power(10, (short)(significantsBefore - significantDigits)); d = Round(d); short digits = (short)(significantDigits + ((d < 0) ? 1 : 0)); String strD = d.ToString().Substring(0, digits); result.Length = 0; result.Append(strD + strZeros.Substring(0, significantsBefore - significantDigits)); } if (short.Parse(exponent, invC) != 0) { result.Append(Common.EXPONENT + exponent); } return result.ToString(); } // public static String RoundToString(…) /// <summary> /// Rounds to a fixed number of significant digits. /// </summary> /// <param name="separator"> /// Language-specific decimal separator /// </param> /// <param name="significantDigits"> /// Requested number of significant digits /// </param> /// <param name="value"></param> /// <returns></returns> public static String RoundToString(char separator, float value, int significantDigits) { return RoundToString(separator, (double)value, (short)significantDigits); } } // public class Maths }. 
using System;
using System.Collections.Generic;

namespace ConsoleApplicationCommons
{
    class TestCommon
    {
        /// <summary>
        /// Test for the common functionality
        /// </summary>
        /// <param name="args"></param>
        static void Main(string[] args)
        {
            // Test rounding
            List<double> values = new List<double>();
            values.Add(0.0);
            AddValue(1.4012984643248202e-45, values);
            AddValue(1.999999757e-5, values);
            AddValue(1.999999757e-4, values);
            AddValue(1.999999757e-3, values);
            AddValue(0.000640589, values);
            AddValue(0.3396899998188019, values);
            AddValue(0.34, values);
            AddValue(7.07, values);
            AddValue(118.188, values);
            AddValue(118.2, values);
            AddValue(123.405009, values);
            AddValue(30.76994323730469, values);
            AddValue(130.76994323730469, values);
            AddValue(540, values);
            AddValue(12345, values);
            AddValue(123456, values);
            AddValue(540911, values);
            AddValue(9.223372036854776e56, values);

            const short SIGNIFICANTS = 5;
            foreach (double element in values)
            {
                Console.Out.WriteLine("Maths.Round('" + Common.PERIOD + "', "
                    + Convert.ToString(element, Maths.invC) + ", " + SIGNIFICANTS
                    + ") = "
                    + Maths.RoundToString(Common.PERIOD, element, SIGNIFICANTS));
            }
            Console.In.Read();
        }

        /// <summary>
        /// Method that adds a negative and a positive value
        /// </summary>
        /// <param name="d"></param>
        /// <param name="values"></param>
        private static void AddValue(double d, List<double> values)
        {
            values.Add(-d);
            values.Add(d);
        }
    } // class TestCommon
}

The results of your better code should comply with the result I got:

Maths.Round('.', 0, 5) = 0.00000
Maths.Round('.', -1.40129846432482E-45, 5) = -1.4012E-45
Maths.Round('.', 1.40129846432482E-45, 5) = 1.4013E-45
Maths.Round('.', -1.999999757E-05, 5) = -1.9999E-5
Maths.Round('.', 1.999999757E-05, 5) = 2.0000E-5
Maths.Round('.', -0.0001999999757, 5) = -0.00019999
Maths.Round('.', 0.0001999999757, 5) = 0.00020000
Maths.Round('.', -0.001999999757, 5) = -0.0019999
Maths.Round('.', 0.001999999757, 5) = 0.0020000
Maths.Round('.', -0.000640589, 5) = -0.00064058
Maths.Round('.', 0.000640589, 5) = 0.00064059
Maths.Round('.', -0.339689999818802, 5) = -0.33968
Maths.Round('.', 0.339689999818802, 5) = 0.33969
Maths.Round('.', -0.34, 5) = -0.33999
Maths.Round('.', 0.34, 5) = 0.34000
Maths.Round('.', -7.07, 5) = -7.0699
Maths.Round('.', 7.07, 5) = 7.0700
Maths.Round('.', -118.188, 5) = -118.18
Maths.Round('.', 118.188, 5) = 118.19
Maths.Round('.', -118.2, 5) = -118.19
Maths.Round('.', 118.2, 5) = 118.20
Maths.Round('.', -123.405009, 5) = -123.40
Maths.Round('.', 123.405009, 5) = 123.41
Maths.Round('.', -30.7699432373047, 5) = -30.769
Maths.Round('.', 30.7699432373047, 5) = 30.770
Maths.Round('.', -130.769943237305, 5) = -130.76
Maths.Round('.', 130.769943237305, 5) = 130.77
Maths.Round('.', -540, 5) = -539.99
Maths.Round('.', 540, 5) = 540.00
Maths.Round('.', -12345, 5) = -12344
Maths.Round('.', 12345, 5) = 12345
Maths.Round('.', -123456, 5) = -123450
Maths.Round('.', 123456, 5) = 123460
Maths.Round('.', -540911, 5) = -540900
Maths.Round('.', 540911, 5) = 540910
Maths.Round('.', -9.22337203685478E+56, 5) = -9.2233E56
Maths.Round('.', 9.22337203685478E+56, 5) = 9.2234E56

If you are interested in a comparison with C++, please compare it with the same example there. If you want to compare C# with Java, take a look at the rounding number example there.

System.Windows.Forms

- TabControl - a tab control that contains a collection of TabPage objects
- DataGrid - data grid/table view

Form class

Events

Controls

Lists

LinkedLists

Queues

Stacks
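As a brief illustration of typical usage of the generic List&lt;T&gt;, Queue&lt;T&gt;, and Stack&lt;T&gt; classes (the values and the class name are arbitrary):

using System;
using System.Collections.Generic;

class CollectionsDemo
{
    static void Main()
    {
        List<string> list = new List<string>();  // resizable list of items
        list.Add("apple");
        list.Add("banana");
        Console.WriteLine(list[0]);              // prints "apple"

        Queue<int> queue = new Queue<int>();     // first-in, first-out
        queue.Enqueue(1);
        queue.Enqueue(2);
        Console.WriteLine(queue.Dequeue());      // prints 1

        Stack<int> stack = new Stack<int>();     // last-in, first-out
        stack.Push(1);
        stack.Push(2);
        Console.WriteLine(stack.Pop());          // prints 2
    }
}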
Hashtables and dictionaries

A dictionary is a collection of values with keys. The values can be very complex, yet searching the keys is still fast. The non-generic class is Hashtable, while the generic one is Dictionary<TKey, TValue>.

Threads are tasks that can run concurrently to other threads and can share data. When your program starts, it creates a thread for the entry point of your program, usually a Main function. So, you can think of a "program" as being made up of threads. The .NET Framework allows you to use threading in your programs to run code in parallel to each other. This is often done for two reasons:

- If the thread running your graphical user interface performs time-consuming work, your program may appear to be unresponsive. Using threading, you can create a new thread to perform tasks and report its progress to the GUI thread.
- On computers with more than one CPU or CPUs with more than one core, threads can maximize the use of computational resources, speeding up tasks.

The Thread class

The System.Threading.Thread class exposes basic functionality for using threads. To create a thread, you simply create an instance of the Thread class with a ThreadStart or ParameterizedThreadStart delegate pointing to the code the thread should start running. For example:

using System;
using System.Threading;

public static class Program
{
    private static void SecondThreadFunction()
    {
        while (true)
        {
            Console.WriteLine("Second thread says hello.");
            Thread.Sleep(1000); // pause execution of the current thread for 1 second (1000 ms)
        }
    }

    public static void Main()
    {
        Thread newThread = new Thread(new ThreadStart(SecondThreadFunction));
        newThread.Start();

        while (true)
        {
            Console.WriteLine("First thread says hello.");
            Thread.Sleep(500); // pause execution of the current thread for half a second (500 ms)
        }
    }
}

You should see the following output:

First thread says hello.
Second thread says hello.
First thread says hello.
First thread says hello.
Second thread says hello.
...

Notice that the while loop is needed because as soon as the function returns, the thread exits, or terminates.

ParameterizedThreadStart

The void ParameterizedThreadStart(object obj) delegate allows you to pass a parameter to the new thread:

using System;
using System.Threading;

public static class Program
{
    private static void SecondThreadFunction(object param)
    {
        while (true)
        {
            Console.WriteLine("Second thread says " + param.ToString() + ".");
            Thread.Sleep(500); // pause execution of the current thread for half a second (500 ms)
        }
    }

    public static void Main()
    {
        Thread newThread = new Thread(new ParameterizedThreadStart(SecondThreadFunction));
        newThread.Start(1234); // here you pass a parameter to the new thread

        while (true)
        {
            Console.WriteLine("First thread says hello.");
            Thread.Sleep(1000); // pause execution of the current thread for a second (1000 ms)
        }
    }
}

The output is:

First thread says hello.
Second thread says 1234.
Second thread says 1234.
First thread says hello.
...

Sharing Data

Although we could use ParameterizedThreadStart to pass parameters to threads, it is not typesafe and is clumsy to use.
We could exploit anonymous delegates to share data between threads, however:

using System;
using System.Threading;

public static class Program
{
    public static void Main()
    {
        int number = 1;

        Thread newThread = new Thread(new ThreadStart(delegate
        {
            while (true)
            {
                number++;
                Console.WriteLine("Second thread says " + number.ToString() + ".");
                Thread.Sleep(1000);
            }
        }));
        newThread.Start();

        while (true)
        {
            number++;
            Console.WriteLine("First thread says " + number.ToString() + ".");
            Thread.Sleep(1000);
        }
    }
}

Notice how the body of the anonymous delegate can access the local variable number.

Asynchronous Delegates

Using anonymous delegates can lead to a lot of syntax, confusion of scope, and lack of encapsulation. However, with the use of lambda expressions, some of these problems can be mitigated. Instead of anonymous delegates, you can use asynchronous delegates to pass and return data, all of which is type safe. It should be noted that when you use an asynchronous delegate, you are actually queuing the call on the thread pool. Also, using asynchronous delegates forces you to use the asynchronous model.

using System;

public static class Program
{
    delegate int del(int[] data);

    public static int SumOfNumbers(int[] data)
    {
        int sum = 0;
        foreach (int number in data)
        {
            sum += number;
        }
        return sum;
    }

    public static void Main()
    {
        int[] numbers = new int[] { 1, 2, 3, 4, 5 };
        del func = SumOfNumbers;
        IAsyncResult result = func.BeginInvoke(numbers, null, null);
        // I can do stuff here while numbers is being added
        int sum = func.EndInvoke(result); // sum is now 15
    }
}

Synchronization

In the sharing data example, you may have noticed that often, if not all of the time, you will get the following output:

First thread says 2.
Second thread says 3.
Second thread says 5.
First thread says 4.
Second thread says 7.
First thread says 7.

One would expect that, at the least, the numbers would be printed in ascending order! This problem arises because the two pieces of code are running at the same time. For example, it printed 3, 5, then 4. Let us examine what may have occurred:

- After "First thread says 2", the first thread incremented number, making it 3, and printed it.
- The second thread then incremented number, making it 4.
- Just before the second thread got a chance to print number, the first thread incremented number, making it 5, and printed it.
- The second thread then printed what number was before the first thread incremented it, that is, 4.

Note that this may have occurred due to console output buffering.

The solution to this problem is to synchronize the two threads, making sure their code doesn't interleave like it did. C# supports this through the lock keyword. We can put blocks of code under this keyword:

using System;
using System.Threading;

public static class Program
{
    public static void Main()
    {
        int number = 1;
        object numberLock = new object();

        Thread newThread = new Thread(new ThreadStart(delegate
        {
            while (true)
            {
                lock (numberLock)
                {
                    number++;
                    Console.WriteLine("Second thread says " + number.ToString() + ".");
                }
                Thread.Sleep(1000);
            }
        }));
        newThread.Start();

        while (true)
        {
            lock (numberLock)
            {
                number++;
                Console.WriteLine("First thread says " + number.ToString() + ".");
            }
            Thread.Sleep(1000);
        }
    }
}

The variable numberLock is needed because the lock keyword only operates on reference types, not value types. This time, you will get the correct output:

First thread says 2.
Second thread says 3.
Second thread says 4.
First thread says 5.
Second thread says 6.
...
The lock keyword operates by trying to gain an exclusive lock on the object passed to it (numberLock). It will only release the lock when the code block has finished execution (that is, after the }). If an object is already locked when another thread tries to gain a lock on the same object, the thread will block (suspend execution) until the lock is released, and then lock the object. This way, sections of code can be prevented from interleaving.

Thread.Join()

The Join method of the Thread class allows a thread to wait for another thread, optionally specifying a timeout:

using System;
using System.Threading;

public static class Program
{
    public static void Main()
    {
        Thread newThread = new Thread(new ThreadStart(delegate
        {
            Console.WriteLine("Second thread reporting.");
            Thread.Sleep(5000);
            Console.WriteLine("Second thread done sleeping.");
        }));
        newThread.Start();

        Console.WriteLine("Just started second thread.");
        newThread.Join(1000);
        Console.WriteLine("First thread waited for 1 second.");
        newThread.Join();
        Console.WriteLine("First thread finished waiting for second thread. Press any key.");
        Console.ReadKey();
    }
}

The output is:

Just started second thread.
Second thread reporting.
First thread waited for 1 second.
Second thread done sleeping.
First thread finished waiting for second thread. Press any key.

The .NET Framework currently supports calling unmanaged functions and using unmanaged data, a process called marshalling. This is often done to use Windows API functions and data structures, but can also be used with custom libraries.

GetSystemTimes

A simple example to start with is the Windows API function GetSystemTimes. It is declared as:

BOOL WINAPI GetSystemTimes(
    __out_opt LPFILETIME lpIdleTime,
    __out_opt LPFILETIME lpKernelTime,
    __out_opt LPFILETIME lpUserTime
);

LPFILETIME is a pointer to a FILETIME structure, which is simply a 64-bit integer. Since C# supports 64-bit numbers through the long type, we can use that. We can then import and use the function as follows:

using System;
using System.Runtime.InteropServices;

public class Program
{
    [DllImport("kernel32.dll")]
    static extern bool GetSystemTimes(out long idleTime, out long kernelTime, out long userTime);

    public static void Main()
    {
        long idleTime, kernelTime, userTime;
        GetSystemTimes(out idleTime, out kernelTime, out userTime);

        Console.WriteLine("Your CPU(s) have been idle for: " + (new TimeSpan(idleTime)).ToString());
        Console.ReadKey();
    }
}

Note that the use of out or ref on parameters automatically passes them as pointers to the unmanaged function.

GetProcessIoCounters

To pass pointers to structs, we can use the out or ref keyword:

using System;
using System.Runtime.InteropServices;

public class Program
{
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount;
        public ulong WriteOperationCount;
        public ulong OtherOperationCount;
        public ulong ReadTransferCount;
        public ulong WriteTransferCount;
        public ulong OtherTransferCount;
    }

    [DllImport("kernel32.dll")]
    static extern bool GetProcessIoCounters(IntPtr ProcessHandle, out IO_COUNTERS IoCounters);

    public static void Main()
    {
        IO_COUNTERS counters;
        GetProcessIoCounters(System.Diagnostics.Process.GetCurrentProcess().Handle, out counters);
        Console.WriteLine("This process has read " + counters.ReadTransferCount.ToString("N0") + " bytes of data.");
        Console.ReadKey();
    }
}

Keywords

break

The break keyword is used to exit out of a loop or switch block.
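A minimal sketch of break inside a loop:

for (int i = 0; i < 10; i++)
{
    if (i == 3)
        break;              // leaves the loop as soon as i reaches 3
    Console.WriteLine(i);   // prints 0, 1, 2
}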
continue

The continue keyword can be used inside any loop in a method. Its effect is to end the current loop iteration and proceed to the next one. If executed inside a for loop, the end-of-loop statement is executed (just like normal loop termination).

default

The default keyword can be used in the switch statement or in generic code: [1]

- The switch statement: Specifies the default label.
- Generic code: Specifies the default value of the type parameter. This will be null for reference types and zero for value types.

in

The in keyword identifies the collection to enumerate in a foreach loop.

lock

The lock keyword allows a section of code to exclusively use a resource, a feature useful in multi-threaded applications. If a lock on the specified object is already held when a piece of code tries to lock the object, the code's thread is blocked until the object is available.

using System;
using System.Threading;

class LockDemo
{
    private static int number = 0;
    private static object lockObject = new object();

    private static void DoSomething()
    {
        while (true)
        {
            lock (lockObject)
            {
                int originalNumber = number;
                number += 1;
                Thread.Sleep((new Random()).Next(1000)); // sleep for a random amount of time
                number += 1;
                Thread.Sleep((new Random()).Next(1000)); // sleep again
                Console.Write("Expecting number to be " + (originalNumber + 2).ToString());
                Console.WriteLine(", and it is: " + number.ToString());
                // without the lock statement, the above would produce unexpected results,
                // since the other thread may have added 2 to the number while we were sleeping.
            }
        }
    }

    public static void Main()
    {
        Thread t = new Thread(new ThreadStart(DoSomething));
        t.Start();
        DoSomething();
        // at this point, two instances of DoSomething are running at the same time.
    }
}

The parameter to the lock statement must be an object reference, not a value type:

class LockDemo2
{
    private int number;
    private object obj = new object();

    public void DoSomething()
    {
        lock (this) // ok
        {
            ...
        }
        lock (number) // not ok, number is not a reference
        {
            ...
        }
        lock (obj) // ok, obj is a reference
        {
            ...
        }
    }
}

A try block must be followed by a finally clause, a catch clause, or both.

finally

The code in a finally clause always runs when control leaves the try block, whether or not an exception was thrown.

ulong

The ulong keyword is used in field, method, property, and variable declarations and in cast and typeof operations as an alias for the .NET Framework structure System.UInt64. That is, it represents a 64-bit unsigned integer whose value ranges from 0 to 18,446,744,073,709,551,615.
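Returning to the finally clause described above, a small sketch of it in use (the file name data.txt is only illustrative):

using System;
using System.IO;

class FinallyDemo
{
    static void Main()
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader("data.txt");
            Console.WriteLine(reader.ReadLine());
        }
        catch (IOException e)
        {
            Console.WriteLine("Could not read the file: " + e.Message);
        }
        finally
        {
            if (reader != null)
                reader.Close(); // runs whether or not an exception occurred
        }
    }
}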
https://en.wikibooks.org/wiki/C_Sharp_Programming/Print_version
CC-MAIN-2016-36
en
refinedweb
Strange cross platform inconsistencies: Is it possible to wrap std::vector<int> as a normal Python list on Windows, so I can do

for i in anIntVector:
    print i

rather than

it = anIntVector.begin()
while it != anIntVector.end():
    print it.value()

I got this to happen on Linux easily using the same interface file as I do on Windows, with swig 2.0.4 on both OSes. It's easy enough to write a generator or whatever, but I don't understand why it works on Linux and not Windows!

/* TheProject.i */
%{
/* Put header files here or function declarations like below */
#include "Metrics.h"
#include "Colors.h"
%}

/* sure i don't need half of these! */
%include windows.i
%include stl.i
%include typemaps.i
%include std_vector.i

%typemap(csbase) ColorNames "uint"

/* instantiate the required template specializations */
namespace std {
%template(IntVector) vector<int>;
%template(DoubleVector) vector<double>;
}

%include "Metrics.h"
%include "Colors.h"

(Interface also used for C#.) I also have the types IntVector and DoubleVector typedef-ed in the C++ source, in case that could cause any issue.

Cheers in advance!
Chris

I think swig is excellent. What confuses me is that there is a DISOWN typemap, but there is no OWN typemap or ACQUIRE. I didn't find it in the documentation. Maybe I can write a function like this:

%newobject acquire;
Klass* acquire(Klass* a);

It returns self. But when I track objects, it may be buggy. Besides, I need a function to ensure that the object passed has ownership (or not). I didn't find it in the doc. HELP!
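For context, a minimal sketch of what the wrapped vector is expected to support once std_vector.i's typemaps are in effect (the module name TheProject is assumed from the interface file above); this is the behavior reported working on Linux but not Windows:

import TheProject                     # hypothetical module name from TheProject.i

v = TheProject.IntVector()            # from %template(IntVector) vector<int>
for x in (1, 2, 3):
    v.push_back(x)

# std_vector.i maps the template onto Python's sequence protocol,
# so plain iteration, len() and indexing should all work:
for i in v:
    print i                           # Python 2 style, as in the original post

print len(v)                          # __len__
print v[0], v[-1]                     # __getitem__, negative indices included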
https://sourceforge.net/p/swig/mailman/swig-user/?viewmonth=201106&viewday=18
CC-MAIN-2016-36
en
refinedweb
EZNamespaceExtensions.Net v2013 Adds Context Menus, Thumbnail, Icons, Property Sheet Support

LogicNP Software has released EZNamespaceExtensions.Net v2013, which enables you to develop shell (Windows Explorer) namespace extensions in .NET. It employs an easy-to-use object model that enables a shell namespace extension to integrate smoothly and seamlessly into Windows Explorer. The look, feel and functionality of the extensions will be similar to that of native Windows Explorer folders.

EZNamespaceExtensions.Net provides support for the following Windows Explorer features and functionalities:

- Folder and non-folder items
- Multi-level subfolders
- Automatic subfolder navigation
- Context menus and background context menus for items
- Details and Report view support with multiple columns
- Thumbnail view, Cut, Copy and Paste
- Group view and categories
- Icons and overlay icons for items
- Automatic renaming functionality for items
- Property sheets and infotips for items

EZNamespaceExtensions.Net also provides the ability to integrate menu items into the main Windows Explorer frame menu and to add buttons to the Windows Explorer toolbar, with drag-drop support from, to and within the namespace extension. It also enables you to present items in the namespace extension as files and folders.

To work with EZNamespaceExtensions.Net v2013, you use the Shell Namespace Extension project template integrated with Visual Studio 2012. It automatically generates the required code and files for the development of a namespace extension. The project can then be modified, built and registered using the RegisterExtensionDotNet40.exe utility, which can be found inside the installation folder. You should then restart Windows Explorer using the RestartExplorer.exe utility in order to view the namespace extension, which will also be visible under My Computer.

InfoQ spoke to Himangi G, Senior Developer at LogicNP Software, to learn more about the possibilities of EZNamespaceExtensions.Net.

InfoQ: What is the need for the development of Windows Explorer namespace extensions in .NET?

Developing Windows Explorer namespace extensions in .NET allows developers to leverage their existing knowledge and skills of the .NET technology and base class library, as well as use their favorite .NET programming language, be it C# or VB.NET. Using .NET to develop namespace extensions allows developers to use the vast base class library (BCL) to their advantage.

InfoQ: Can you share with us the types of applications that can be developed using EZNamespaceExtensions.Net v2013?

The following are just some of the types of Windows Explorer-integrated and File Open/Save Dialog-integrated software that you can develop using EZNamespaceExtensions.Net 2013:

- Document Management Software
- Content Management Software
- Information Management Software
- Virtual Drives
- FTP programs
- Database Browser or Front-End
- Protocol Handlers

InfoQ: Is it necessary to purchase a license of EZNamespaceExtensions.Net v2013 when distributing the application?

No, EZNamespaceExtensions.Net v2013 includes royalty-free redistribution licenses, so you can redistribute the EZNamespaceExtensions.Net binary with your developed namespace extension absolutely free.

InfoQ: How easy is it to develop an application with EZNamespaceExtensions.Net v2013?

EZNamespaceExtensions.Net allows you to jumpstart namespace extension development with Visual Studio IDE project wizards. It has a simple, well designed and thoroughly tested API framework which allows you to develop namespace extensions in hours or days instead of weeks or months. It has full feature support, allowing your namespace extension to integrate seamlessly into Windows Explorer and the File Open/Save dialogs of all applications. It includes a registration utility to easily deploy your developed namespace extensions.
https://www.infoq.com/news/2013/04/eznamespaceextensions-net-v2013
CC-MAIN-2016-36
en
refinedweb
/* version.c -- distribution and version numbers. */

/* Copyright (C) 1989 Free Software Foundation, Inc. */

#include <stdio.h>

#include "stdc.h"
#include "version.h"
#include "patchlevel.h"
#include "conftypes.h"

#include "bashintl.h"

extern char *shell_name;

/* Defines from version.h */
const char *dist_version = DISTVERSION;
int patch_level = PATCHLEVEL;
int build_version = BUILDVERSION;

#ifdef RELSTATUS
const char *release_status = RELSTATUS;
#else
const char *release_status = (char *)0;
#endif

const char *sccs_version = SCCSVERSION;

/* If == 31, shell compatible with bash-3.1, == 32 with bash-3.2, and so on */
int shell_compatibility_level = 32;

/* Functions for getting, setting, and displaying the shell version. */

/* Forward declarations so we don't have to include externs.h */
extern char *shell_version_string __P((void));
extern void show_shell_version __P((int));

/* Give version information about this shell. */
char *
shell_version_string ()
{
  static char tt[32] = { '\0' };

  if (tt[0] == '\0')
    {
      if (release_status)
#if defined (HAVE_SNPRINTF)
	snprintf (tt, sizeof (tt), "%s.%d(%d)-%s", dist_version, patch_level, build_version, release_status);
#else
	sprintf (tt, "%s.%d(%d)-%s", dist_version, patch_level, build_version, release_status);
#endif
      else
#if defined (HAVE_SNPRINTF)
	snprintf (tt, sizeof (tt), "%s.%d(%d)", dist_version, patch_level, build_version);
#else
	sprintf (tt, "%s.%d(%d)", dist_version, patch_level, build_version);
#endif
    }
  return tt;
}

void
show_shell_version (extended)
     int extended;
{
  printf ("GNU bash, version %s (%s)\n", shell_version_string (), MACHTYPE);
  if (extended)
    printf (_("Copyright (C) 2007 Free Software Foundation, Inc.\n"));
}
http://opensource.apple.com//source/bash/bash-86.1/bash-3.2/version.c
CC-MAIN-2016-36
en
refinedweb
On Tue, May 24, 2005 at 09:55:31PM -0400, Jurij Smakov wrote: > Hi, > > Thanks to everyone who responded to the proposal on new uniform packaging > scheme. Below is the summary: > > Sven Luther wrote: > > >To be absolutely sure that there will be no namespace collision between > >this one and the flavour version, i would name it : > > > > kernel-headers-$(subarch)-$(version)-$(abiname) > > > >Since it is a variant of the above common header file, and unpack to > >.../kernel-headers-$(subarch)... > > > >The flavour packages of this subarch will then depend on this one, and > >add the appropriate symlinks. > > That sounds reasonable. If there are no objections, I'll look into > implementing it. Cool. > >So /lib/modules/$(version)-$(abiname)-$(flavour)/source is deprecated > >and will have to be removed ? > > Upon further investigation and discussion with kernel-package maintainer > it turned out that the postinst file included by make-kpkg into > kernel-images has (and had for quite a while) a piece, which is supposed > to remove the source link if it is dangling, i.e. points to a non-existent > directory. Unfortunately, there is a bug in this piece of code, due to > which it was never removed :-). I filed a bug (#309981) against > kernel-package, which will hopefully be fixed in the next upload. That is not the problem. The question is which of the two links we want to keep so that third-party modules will be automatically buildable ? Either build and source exist, but what is each one supposed to do ? I am not speaking kernel-package or debian -wise, but what is expected by third party driver writters from the mainline kernel ? > Bastian Blank wrote: > > >You speak about kernel but mean linux, do you? > > This is a good point: linux kernel is pretty much dominating at the > moment, however in the future hurd and netbsd kernels may become (are?) > available. I do not know the current status of these projects, so it is > unclear whether it is worth an effort to work on a single naming scheme > which would cover _all_ kernels. Anyone else thinks that kernel- prefix is > too generic and should be replaced by something like linux- ? It would not > be too hard to implement. I would definitively go for : linux-kernel-<version> -> source package linux-source-<version>-<abi> -> binary package containing the source (and linux-tree-<version>-<abi> and linux-patch-<version>-<abi>) linux-image -> binary package containing the kernel. (and linux-headers, ...) Notice that if we are going to use these headers only for building modules, we should maybe rename them to linux-build or linux-module-support, in order to avoid confusion. > >cobalt mips/mipsel? Please clearify. > > It looks like there might be a problem with cobalt mips/mipsel flavours > which I am not aware of. I asked Bastian what the problem is exactly, but > did not get a response so far. As I understand, they are two different > architectures, so the proposed packaging should work just fine. Indeed, i was under that impression too, but thiemo told me that the current mips/mipsel package use the same source for both, and simply builds the binary packages for those two arches, as such it is already a mini-common-package, and should work out just well. > >Why not use one package with the arch-specific and one with the other > >parts? > > This is in the context of a common kernel-headers package. As it was > mentioned by other people, the discussion of it is below. 
> > Christoph Hellwig wrote: > > >I think it's a very bad idea to have different source bases and if > >possible we should implement it in the packaging - that would encourage > >people to use the facility instead of fixing things up properly. And > >doing that is always possible. > > Basically, what Christoph is proposing is not to include the > arches/subarches which require unmerged extra patches into the common > packaging scheme. I think this is a little bit extreme. I don't think it > is fair to require the Debian kernel maintainers to "fix" their > architectures which do not build from a common source. Besides there is a > constant progress with this, as I know hppa patch is constantly shrinking, > for example. Any other opinions? Christoph is welcome to help fixing the apus or nubus patchset so it is integrated upstream. I think encouraging a commmon source code is good, but we need a transition phase. > >I think we should only have a single kernel-headers-$(version)-$(abiname) > >package. There's quite a bit of cross including of asm-<arch> headers, > >and having the full set available makes getting this right much easier. > >It's not a lot of space used anyway. > > Right, so this is a third "aye" in favour of the common kernel-headers > package for all arches (counting the one by Andres Salomon expressed in an > irc conversation). In principle it is possible to implement, however, as > it is quite significant deviation from the existing packaging scheme, > might take longer to implement. If this way is chosen, we'll probably need > the following packages: > > kernel-headers-$(version)-$(abiname) > Arch-independent package, containing all the headers including the > include/asm-* for all arches. > > kernel-scripts-$(version)-$(abiname) > Arch-dependent package, containing the contents of scripts/ directory > along with the binary files, built from source there. I don't think > that we can do without shipping the binaries, as rebuilding them on > the user's machine will require write access to /usr/src. Some > architectures also include the plain text, but arch-specific file > arch/$(ARCH)/kernel/asm-offsets.s, which should be included with the > headers. It must go into this package, along with other arch-dependent > files I am probably forgetting (are there any?). Ok, seems reasonable. kernel-headers-$(version)-$(abiname)-$(flavour) would depend on both of them. > The only unclear thing is how to handle the subarches, which potentially > modify the header files with their patches. Perhaps there should be a > possibility for subarch-specific patches, such as > > kernel-headers-$(subarch)-$(version)-$(abiname) > kernel-scripts-$(subarch)-$(version)-$(abiname) Yes, i would do this. These replace the normal one for all flavours of this arch/subarch. This is the best solution for flexibility, and if done right, would not be too much of a weight. > Any thoughts and ideas on this are welcome. As soon as the discussion is > settled, I'll try to write up the results in some more or less permanent > location. Cool. Friendly, Sven Luther
https://lists.debian.org/debian-kernel/2005/05/msg00516.html
CC-MAIN-2016-36
en
refinedweb
XIST 2.5 has been released!

What is it?
===========

XIST is an XML-based extensible HTML generator.

What's new in version 2.5?
==========================

* Specifying content models for elements has seen major enhancements. The boolean class attribute empty has been replaced by an object model whose checkvalid method will be called for validating the element content.

* A new module ll.xist.sims has been added that provides simple schema validation. Schema violations will be reported via Python's warning framework.

* All namespace modules have been updated to use sims information. The SVG module has been updated to SVG 1.1. The docbook module has been updated to DocBook 4.3.

* It's possible to switch off validation during parsing and publishing.

* Experimental support for Holger Krekel's XPython has been added.

* Creating global attributes has been simplified. Passing an instance of ll.xist.xsc.Namespace.Attrs to an Element constructor now does the right thing.

* ll.xist.xsc.CharRef now inherits from ll.xist.xsc.Text too, so you don't have to special-case CharRefs any more. When publishing, CharRefs will be handled like Text nodes.

* ll.xist.ns.meta.contenttype now has an attribute mimetype (defaulting to "text/html") for specifying the MIME type.

* ll.xist.ns.htmlspecials.caps has been removed.

* Registering elements in namespace classes has been rewritten to use a cache now.

* Pretty printing has been changed: whitespace will only be added now if there are no text nodes in element content.

* Two mailing lists are now available: one for discussion about XIST and one for XIST announcements.

For changes in older versions see:

Where can I get it?
===================

XIST can be downloaded from or

Web pages are at

ViewCVS access is available at

For information about the mailing lists go to

Bye,
Walter Dörwald
https://mail.python.org/pipermail/xml-sig/2004-June/010325.html
CC-MAIN-2016-36
en
refinedweb
Asked by: Why ilmerge doesn't work if a dll file contains a xaml user control?

To reproduce the error, follow the steps below.

1) Create a WPF User Control Library.

2) Create a default user control as follows:

<UserControl x:
<Grid>
<Label Content="UserControl2.xaml"/>
</Grid>
</UserControl>

3) Use ilmerge to create a test library:

ilmerge /lib /out:wpfTestLib.dll WpfUserControlLibrary1.dll

4) Add wpfTestLib.dll to the references of another WPF window application and add the UserControl2 custom control:

<Window x:
<Grid>
<c:UserControl2/>
</Grid>
</Window>

5) You will get the following compiler error:

Could not create an instance of type 'UserControl2'.

I am using VS2008 and I have downloaded the latest version of ILMerge. Thus I wonder what went wrong?

Question

All replies

You have your xml namespace defined as:

xmlns:c="clr-namespace:WpfControlLibrary1;assembly=tcTestLib"

This is saying to pull it from the assembly "tcTestLib.dll". However, with ILMerge, you created the assembly "wpfTestLib.dll". You'll need to change the corresponding XAML declaration, or it will not find the appropriate control.

Reed Copsey, Jr.

Hi Reed, thanks for your reply. I made a typing mistake. Even though I changed tcTestLib to wpfTestLib, the error is still there. What seems very strange is that the IntelliSense of the VS2008 XAML editor shows "UserControl2" right after I finish typing "c:", which means IntelliSense knows where to find the custom user control. However, I just can't get it to compile.

Hi Microsoft team, I am not trying to be difficult. The reason I want to find out whether this is a bug is that I have a lot of XAML user controls which are all in separate project files. I simply want to find out if this is a bug or, maybe, there is a simple solution to fix it. Thanks.

Sarah, I've come across this problem myself in recent days. From a brief bit of searching it seems ILMerge doesn't work flawlessly with WPF as it can't handle XAML resources. There are a few other applications suggested here:

I don't know if you are still having a problem with this - I sort of am. I found that anything saved as a resource seems to appear in the new assembly under the oldassemblyname.g.resources. Take a look in Reflector. Normally you'd have the following:

<oldAssemblyName>.dll: \Resources\<oldAssemblyName>.g.resources

After the merge you have:

<newAssemblyName>.dll: \Resources\<oldAssemblyName>.g.resources

I've been able to load the XAML resources out (resource dictionary) by using the following:

Stream stream = assembly.GetManifestResourceStream(oldAssemblyName + ".g.resources");
// Read the found resource
// Find the correct key in the resource
using (ResourceReader resourceReader = new ResourceReader(stream))
{
    foreach (DictionaryEntry de in resourceReader)
    {
        if (string.Compare((string)de.Key, ResourcePath, true) == 0)
        {
            rd = (ResourceDictionary)XamlReader.Load((Stream)de.Value);
            break;
        }
    }
}

Where the ResourcePath is <oldAssemblyName>.g.resource. I do however still have one problem. I have a reference to a view model inside my resource dictionary, and this bombs out when loading the resource dictionary. I've added the schema as:

[assembly: XmlnsDefinition("", "ResourceSupplierDLL.ViewModels")]

Now, it's the way it bombs out that is funny. If I put my ResourceSupplierDLL.DLL as the first assembly to merge in ILMerge - it works.
But if it's anywhere but the first I get the exception public type TestViewModel cannot be found - even though after loading the assembly the type does exist (checked it out by calling assembly.GetTypes() on my just loaded assembly). Very odd problem indeed. Ben
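For reference, here is the snippet from the earlier reply tidied into a self-contained helper. It is a sketch of the described workaround, not vendor code; oldAssemblyName and resourcePath are placeholders for the values discussed above:

using System.Collections;
using System.IO;
using System.Reflection;
using System.Resources;
using System.Windows;
using System.Windows.Markup;

static class MergedResourceLoader
{
    // After ILMerge, a XAML resource keeps its old assembly's name,
    // i.e. "<oldAssemblyName>.g.resources", inside the merged assembly.
    public static ResourceDictionary Load(Assembly assembly,
                                          string oldAssemblyName,
                                          string resourcePath)
    {
        using (Stream stream = assembly.GetManifestResourceStream(
                   oldAssemblyName + ".g.resources"))
        using (ResourceReader reader = new ResourceReader(stream))
        {
            foreach (DictionaryEntry de in reader)
            {
                // de.Key is the resource path (the "ResourcePath" above)
                if (string.Compare((string)de.Key, resourcePath, true) == 0)
                    return (ResourceDictionary)XamlReader.Load((Stream)de.Value);
            }
        }
        return null;
    }
}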
https://social.msdn.microsoft.com/Forums/vstudio/en-US/013811c2-63a8-474d-9bdf-f2ab79f28099/why-ilmerge-doesnt-work-if-a-dll-file-contains-a-xaml-user-control?forum=netfxbcl
CC-MAIN-2016-36
en
refinedweb
On Fri, 2010-06-11 at 05:20 +0100, Ben Dooks wrote:>.Well, in this case the goal is to unify things, both within ARM andbetween architectures, so I fail to see Linus complaining about that :-)> What do people think of just changing everyone who is currently using> clk support to using this new implementation?It's risky I suppose... there isn't many users of struct clk in powerpcland today (I think one SoC platform only uses it upstream at themoment) so I won't mind getting moved over at once but on ARM, you haveto deal with a lot more cruft that might warrant a more progressivemigration approach... but I'll let you guys judge.> > struct clk {> > const struct clk_ops *ops;> > unsigned int enable_count;> > struct mutex mutex;> > I'm a little worried about the size of struct clk if all of them> have a mutex in them. If i'm right, this is 40 bytes of data each> clock has to take on board.> > Doing the following:> > find arch/arm -type f -name "*.c" | xargs grep -c -E "struct.*clk.*=.*{" | grep -v ":0" | awk 'BEGIN { count=0; > diff --git a/include/linux/clk.h b/include/linux/clk.h> > index 1d37f42..bb6957a 100644> > --- a/include/linux/clk.h> > +++ b/include/linux/clk.h> > @@ -3,6 +3,7 @@> > *> > * Copyright (C) 2004 ARM Limited.> > * Written by Deep Blue Solutions Limited.> > + * Copyright (c) 2010 Jeremy Kerr <jeremy.kerr@canonical.com>> > *> > * This program is free software; you can redistribute it and/or modify> > * it under the terms of the GNU General Public License version 2 as> > @@ -11,36 +12,125 @@> > #ifndef __LINUX_CLK_H> > #define __LINUX_CLK_H> > > > -struct device;> > +#include <linux/err.h>> > +#include <linux/mutex.h>> > > > -/*> > - * The base API.> > +#ifdef CONFIG_USE_COMMON_STRUCT_CLK> > +> > +/* If we're using the common struct clk, we define the base clk object here,> > + * which will be 'subclassed' by device-specific implementations. For example:> > + *> > + * struct clk_foo {> > + * struct clk;> > + * [device specific fields]> > + * };> > + *> > + * We define the common clock API through a set of static inlines that call the> > + * corresponding clk_operations. 
The API is exactly the same as that documented> > + * in the !CONFIG_USE_COMMON_STRUCT_CLK case.> > */> > > > +struct clk {> > + const struct clk_ops *ops;> > + unsigned int enable_count;> > + struct mutex mutex;> > +};> > how about defining a nice kerneldoc for this.> > > +#define INIT_CLK(name, o) \> > + { .ops = &o, .enable_count = 0, \> > + .mutex = __MUTEX_INITIALIZER(name.mutex) }> > how about doing the mutex initinitialisation at registration> time, will save a pile of non-zero code in the image to mess up> the compression.> > ~> +struct clk_ops {> > + int (*enable)(struct clk *);> > + void (*disable)(struct clk *);> > + unsigned long (*get_rate)(struct clk *);> > + void (*put)(struct clk *);> > + long (*round_rate)(struct clk *, unsigned long);> > + int (*set_rate)(struct clk *, unsigned long);> > + int (*set_parent)(struct clk *, struct clk *);> > + struct clk* (*get_parent)(struct clk *);> > should each clock carry a parent field and the this is returned by> the get parent call.~~> > > +};> > +> > +static inline int clk_enable(struct clk *clk)> > +{> > + int ret = 0;> > +> > + if (!clk->ops->enable)> > + return 0;> > +> > + mutex_lock(&clk->mutex);> > + if (!clk->enable_count)> > + ret = clk->ops->enable(clk);> > +> > + if (!ret)> > + clk->enable_count++;> > + mutex_unlock(&clk->mutex);> > +> > + return ret;> > +}> > So we're leaving the enable parent code now to each implementation?> > I think this is a really bad decision, it leaves so much open to bad> code repetition, as well as something the core should really be doing> if it had a parent clock field.> > ~> +static inline void clk_disable(struct clk *clk)> > +{> > + if (!clk->ops->enable)> > + return;> > so if we've no enable call we ignore disable too?> > also, we don't keep an enable count if this fields are in use,> could people rely on this being correct even if the clock has> no enable/disable fields.> > Would much rather see the enable_count being kept up-to-date> no matter what, given it may be watched by other parts of the> implementation, useful for debug info, and possibly useful if> later in the start sequence the clk_ops get changed to have this> field.~> > > +~ mutex_lock(&clk->mutex);> > +> > + if (!--clk->enable_count)> > + clk->ops->disable(clk);> > +> > + mutex_unlock(&clk->mutex);> > +}> > +> > +static inline unsigned long clk_get_rate(struct clk *clk)> > +{> > + if (clk->ops->get_rate)> > + return clk->ops->get_rate(clk);> > + return 0;> > +}> > +> > +static inline void clk_put(struct clk *clk)> > +{> > + if (clk->ops->put)> > + clk->ops->put(clk);> > +}> > I'm beginging to wonder if we don't just have a set of default ops> that get set into the clk+ops at registration time if these do> not have an implementation.> ~> > +static inline long clk_round_rate(struct clk *clk, unsigned long rate)> > +{> > + if (clk->ops->round_rate)> > + return clk->ops->round_rate(clk, rate);> > + return -ENOSYS;> > +}> > +> > +static inline int clk_set_rate(struct clk *clk, unsigned long rate)> >~ +{> > + if (clk->ops->set_rate)> > + return clk->ops->set_rate(clk, rate);> > + return -ENOSYS;> > +}> > +> > +static inline int clk_set_parent(struct clk *clk, struct clk *parent)> > +{> > + if (clk->ops->set_parent)> > + return clk->ops->set_parent(clk, parent);> > + return -ENOSYS;> > +}> > We have an interesting problem here which I belive should be dealt> with, what happens when the clock's parent is changed with respect> to the enable count of the parent.> > With the following instance:> > we have clocks a, b, c;> a and b are 
possible parents for c;> c starts off with a as parent> > then the driver comes along:> > 1) gets clocks a, b, c;> 2) clk_enable(c);> 3) clk_set_parent(c, b);> > now we have the following:> > A) clk a now has an enable count of non-zero> B) clk b may not be enabled> C) even though clk a may now be unused, it is still running> D) even though clk c was enabled, it isn't running since step3> > this means that either any driver that is using a multi-parent clock> has to deal with the proper enable/disable of the parents (this is> is going to lead to code repetition, and I bet people will get it> badly wrong).> > I belive the core of the clock code should deal with this, since> otherwise we end up with the situation of the same code being> repeated throughout the kernel.> > > +static inline struct clk *clk_get_parent(struct clk *clk)> > +{> > + if (clk->ops->get_parent)> > + return clk->ops->get_parent(clk);> > + return ERR_PTR(-ENOSYS);> > +}> >> > +#else /* !CONFIG_USE_COMMON_STRUCT_CLK */> > > > /*> > - * struct clk - an machine class defined object / cookie.> > + * Global clock object, actual structure is declared per-machine> > */> > struct clk;> > > > /**> > - *_enable - inform the system when the clock source should be running.> > * @clk: clock source> > *> > @@ -83,12 +173,6 @@ unsigned long clk_get_rate(struct clk *clk);> > */> > void clk_put(struct clk *clk);> > > > -> > -/*> > - * The remaining APIs are optional for machine class support.> > - */> > -> > -> > /**> > * clk_round_rate - adjust a rate to the exact rate a clock can provide> > * @clk: clock source> > @@ -125,6 +209,27 @@ int clk_set_parent(struct clk *clk, struct clk *parent);> > */> > struct clk *clk_get_parent(struct clk *clk);> > > > +#endif /* !CONFIG_USE_COMMON_STRUCT_CLK */> > +> > +struct device;> > +> > +/**> > + *_get_sys - get a clock based upon the device name> > * @dev_id: device name> > --> > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in> > the body of a message to majordomo@vger.kernel.org> > More majordomo info at> > Please read the FAQ at>
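For illustration, a rough sketch (not part of the posted patch) of how the core could keep parent enable counts consistent across a reparent, per the a/b/c scenario above; clk_reparent is a hypothetical helper name, and locking and ERR_PTR handling are deliberately glossed over:

static int clk_reparent(struct clk *clk, struct clk *new_parent)
{
	struct clk *old_parent = clk_get_parent(clk);
	int ret;

	/* An enabled clock must see its new parent running first (B/D)... */
	if (clk->enable_count) {
		ret = clk_enable(new_parent);
		if (ret)
			return ret;
	}

	ret = clk_set_parent(clk, new_parent);

	if (clk->enable_count) {
		if (ret)
			clk_disable(new_parent);	/* switch failed: undo */
		else
			clk_disable(old_parent);	/* ...and the old parent
						 	 * drops our reference (A/C) */
	}
	return ret;
}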
http://lkml.org/lkml/2010/6/11/52
CC-MAIN-2016-36
en
refinedweb
LinkRewriterTransformer

Summary

Rewrites URIs in links to a value determined by an InputModule. The URI scheme identifies the InputModule to use, and the rest of the URI is used as the attribute name.

Basic information

Documentation

Example

For instance, if we had an XMLFileModule, configured to read values from an XML file:

<site>
<faq>
<how_to_boil_eggs href="faq/eggs.html"/>
</faq>
</site>

mapped to the prefix 'site:', then

<link href="site:/site/faq/how_to_boil_eggs/@href">

would be replaced with

<link href="faq/eggs.html">

InputModule Configuration

InputModules are configured twice; first statically in cocoon.xconf, and then dynamically at runtime, with dynamic configuration (if any) taking precedence. The transformer allows you to pass a dynamic configuration to the used InputModules as follows. First, a template configuration is specified in the static <map:components> block of the sitemap within <input-module> tags:

<map:transformer
<link-attrs>href src</link-attrs>
<schemes>site ext</schemes>
<input-module
<file src="cocoon://samples/link/linkmap" reloadable="true"/>
</input-module>
<input-module
<input-module
<file src="{src}" reloadable="true"/>
</input-module>
<prefix>/site/</prefix>
<suffix>/@href</suffix>
</input-module>
</map:transformer>

Here, we have first configured which attributes to examine, and which URL schemes to consider rewriting. In this example, <a href="site:index"> would be processed. See below for more configuration options.

Then, we have established dynamic configuration templates for two modules, 'site' (an XMLFileModule) and 'mapper' (a SimpleMappingMetaModule). All other InputModules will use their static configs. Note that, when configuring a meta InputModule like 'mapper', we need to also configure the 'inner' module (here, 'site') with a nested <input-module>.

There is one further twist; to have really dynamic configuration, we need information available only when the transformer actually runs. This is why the above config was called a "template" configuration; it needs to be 'instantiated' and provided extra info, namely:

- The {src} string will be replaced with the map:transform @src attribute value.
- Any other {variables} will be replaced with map:parameter values.

<map:match
<map:generate
<map:transform
<map:serialize
</map:match>

Which would cause the 'mapper' XMLFileModule to be configured with a different XML file, depending on the request. Similarly, we could use a dynamic prefix:

<prefix>{prefix}</prefix>

in the template config, and:

<map:parameter

in the map:transform.

A live example of LinkRewriterTransformer can be found in the Apache Forrest sitemap.

Transformer Configuration

The following configuration entries in the map:transformer block are recognised:

link-attrs: Space-separated list of attributes to consider links (to be transformed). The whole value of the attribute is considered a link and transformed.

link-attr: 0..n of these elements each specify an attribute containing link(s) (to be transformed) and optionally a regular expression to locate substring(s) of the attribute value considered link(s). Has two attributes:
  name: (required) name of the attribute whose value contains link(s).
  pattern: (optional) regular expression such that when matched against the attribute value, all parenthesized expressions (except number 0) will be considered links that should be transformed. If absent, the whole value of the attribute is considered to be a link, as if the attribute was included in 'link-attrs'.

schemes: Space-separated list of URI schemes to explicitly include. If specified, all URIs with unlisted schemes will not be converted.

exclude-schemes: Space-separated list of URI schemes to explicitly exclude. Defaults to 'http https ftp news mailto'.

bad-link-str: String to use for links with a correct InputModule prefix, but no value therein. Defaults to the original URI.

namespace-uri: The namespace uri of elements whose attributes are considered for transformation. Defaults to the empty namespace ("").

The attributes considered to contain links are the union of the attributes specified in the 'link-attrs' element and all 'link-attr' elements. Each attribute should be specified only once, either in 'link-attrs' or 'link-attr'; i.e. an attribute can have at most one regular expression associated with it. If neither 'link-attrs' nor 'link-attr' configuration is present, defaults to 'href'.

Below is an example of regular expression usage that will transform links x1 and x2 in <action target="foo url(x1) bar url(x2)"/>:

<map:transformer
<link-attr
<!-- additional configuration ... -->
</map:transformer>

When matched against the value of the target attribute above, the parenthesized expressions are:

$0 = url(x1) bar url(x2)
$1 = x1
$2 = x2

Expression number 0 is always discarded by the transformer and the rest are considered links and re-written.

If present, map:parameters from the map:transform block override the corresponding configuration entries from map:transformer. As an exception, 'link-attr' parameters are not recognised; a 'link-attrs' parameter overrides both 'link-attrs' and 'link-attr' configuration.
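As an illustration of that override, a hypothetical pipeline fragment (the pattern and file names are made up) that replaces the statically configured attribute list at request time:

<map:match pattern="**.html">
  <map:generate src="content/{1}.xml"/>
  <map:transform type="linkrewriter" src="cocoon:/{1}.linkmap">
    <!-- overrides both 'link-attrs' and all 'link-attr' entries -->
    <map:parameter name="link-attrs" value="href src background"/>
  </map:transform>
  <map:serialize/>
</map:match>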
http://cocoon.apache.org/2.2/blocks/linkrewriter/1.0/1102_1_1.html
CC-MAIN-2016-36
en
refinedweb
Part 7: An Interlude, Deferred This continues the introduction started here. You can find an index to the entire series here. Callbacks and Their Consequences In Part 6 we came face-to-face with this fact: callbacks are a fundamental aspect of asynchronous programming with Twisted. Rather than just a way of interfacing with the reactor, callbacks will be woven into the structure of any Twisted program we write. So using Twisted, or any reactor-based asynchronous system, means organizing our code in a particular way, as a series of “callback chains” invoked by a reactor loop. Even an API as simple as our get_poetry function required callbacks, two of them in fact: one for normal results and one for errors. Since, as Twisted programmers, we’re going to have to make so much use of them, we should spend a little bit of time thinking about the best ways to use callbacks, and what sort of pitfalls we might encounter. Consider this piece of code that uses the Twisted version of get_poetry from client 3.1: ... def got_poem(poem): print poem reactor.stop() def poem_failed(err): print >>sys.stderr, 'poem download failed' print >>sys.stderr, 'I am terribly sorry' print >>sys.stderr, 'try again later?' reactor.stop() get_poetry(host, port, got_poem, poem_failed) reactor.run() The basic plan here is clear: - If we get the poem, print it out. - If we don’t get the poem, print out an Error Haiku. - In either case, end the program. The ‘synchronous analogue’ to the above code might look something like this: ... try: poem = get_poetry(host, port) # the synchronous version of get_poetry except Exception, err: print >>sys.stderr, 'poem download failed' print >>sys.stderr, 'I am terribly sorry' print >>sys.stderr, 'try again later?' sys.exit() else: print poem sys.exit() So the callback is like the else block and the errback is like the except. That means invoking the errback is the asynchronous analogue to raising an exception and invoking the callback corresponds to the normal program flow. What are some of the differences between the two versions? For one thing, in the synchronous version the Python interpreter will ensure that, as long as get_poetry raises any kind of exception at all, for any reason, the except block will run. If we trust the interpreter to run Python code correctly we can trust that error block to run at the right time. Contrast that with the asynchronous version: the poem_failed errback is invoked by our code, the clientConnectionFailed method of the PoetryClientFactory. We, not Python, are in charge of making sure the error code runs if something goes wrong. So we have to make sure to handle every possible error case by invoking the errback with a Failure object. Otherwise, our program will become “stuck” waiting for a callback that never comes. forget to “raise” our asynchronous exception (by calling the errback function in PoetryClientFactory), our program will just run forever, blissfully unaware that anything is amiss. Clearly, handling errors in an asynchronous program is important, and also somewhat tricky. You might say that handling errors in asynchronous code is actually more important than handling the normal case, as things can go wrong in far more ways than they can go right. Forgetting to handle the error case is a common mistake when programming with Twisted. Here’s another fact about the synchronous code above: either the else block runs exactly once, or the except block runs exactly once (assuming the synchronous version of get_poetry doesn’t enter an infinite loop). 
The Python interpreter won’t suddenly decide to run them both or, on a whim, run the else block twenty-seven times. And it would be basically impossible to program in Python if it did! But again, in the asynchronous case we are in charge of running the callback or the errback. Knowing us, we might make some mistakes. We could call both the callback and the errback, or invoke the callback twenty-seven times. That would be unfortunate for the users of get_poetry. Although the docstring doesn’t explicitly say so, it really goes without saying that, like the else and except blocks in a try/ except statement, either the callback will run exactly once or the errback will run exactly once, for each specific call to get_poetry. Either we get the poem or we don’t. Imagine trying to debug a program that makes three poetry requests and gets seven callback invocations and two errback invocations. Where would you even start? You’d probably end up writing your callbacks and errbacks to detect when they got invoked a second time for the same get_poetry call and throw an exception right back. Take that, get_poetry. One more observation: both versions have some duplicate code. The asynchronous version has two calls to reactor.stop and the synchronous version has two calls to sys.exit. We might refactor the synchronous version like this: ... try: poem = get_poetry(host, port) # the synchronous version of get_poetry except Exception, err: print >>sys.stderr, 'poem download failed' print >>sys.stderr, 'I am terribly sorry' print >>sys.stderr, 'try again later?' else: print poem sys.exit() Can we refactor the asynchronous version in a similar way? It’s not really clear that we can, since the callback and errback are two different functions. Do we have to go back to a single callback to make this possible? Ok, here are some of the insights we’ve discovered about programming with callbacks: - Calling errbacks is very important. Since errbacks take the place of exceptblocks, users need to be able to count on them. They aren’t an optional feature of our APIs. - Not invoking callbacks at the wrong time is just as important as calling them at the right time. For a typical use case, the callback and errback are mutually exclusive and invoked exactly once. - Refactoring common code might be harder when using callbacks. We’ll have more to say about callbacks in future Parts, but for now this is enough to see why Twisted might have an abstraction devoted to managing them. The Deferred Since callbacks are used so much in asynchronous programming, and since using them correctly can, as we have discovered, be a bit tricky, the Twisted developers created an abstraction called a Deferred to make programming with callbacks easier. The Deferred class is defined in twisted.internet.defer. The word “deferred” is either a verb or an adjective in everyday English, so it might sound a little strange used as a noun. Just know that, from now on, when I use the phrase “the deferred” or “a deferred”, I’m referring to an instance of the Deferred class. We’ll talk about why it is called Deferred in a future Part. It might help to mentally add the word “result” to each phrase, as in “the deferred result”. As we will eventually see, that’s really what it is. A deferred contains a pair of callback chains, one for normal results and one for errors. A newly-created deferred has two empty chains. We can populate the chains by adding callbacks and errbacks and then fire the deferred with either a normal result (here’s your poem!) 
or an exception (I couldn’t get the poem, and here’s why). Firing the deferred will invoke the appropriate callbacks or errbacks in the order they were added. Figure 12 illustrates a deferred instance with its callback/errback chains: Let’s try this out. Since deferreds don’t use the reactor, we can test them out without starting up the loop. You might have noticed a method on Deferred called setTimeout that does use the reactor. It is deprecated and will cease to exist in a future release. Pretend it’s not there and don’t use it. Our first example is in twisted-deferred/defer-1.py: from twisted.internet.defer import Deferred def got_poem(res): print 'Your poem is served:' print res def poem_failed(err): print 'No poetry for you.' d = Deferred() # add a callback/errback pair to the chain d.addCallbacks(got_poem, poem_failed) # fire the chain with a normal result d.callback('This poem is short.') print "Finished" This code makes a new deferred, adds a callback/errback pair with the addCallbacks method, and then fires the “normal result” chain with the callback method. Of course, it’s not much of a chain since it only has a single callback, but no matter. Run the code and it produces this output: Your poem is served: This poem is short. Finished That’s pretty simple. Here are some things to notice: - Just like the callback/errback pairs we used in client 3.1, the callbacks we add to this deferred each take one argument, either a normal result or an error result. It turns out that deferreds support callbacks and errbacks with multiple arguments, but they always have at least one, and the first argument is always either a normal result or an error result. - We add callbacks and errbacks to the deferred in pairs. - The callbackmethod fires the deferred with a normal result, the method’s only argument. - Looking at the order of the Ok, let’s push the other button. The example in twisted-deferred/defer-2.py fires the deferred’s errback chain: from twisted.internet.defer import Deferred from twisted.python.failure import Failure def got_poem(res): print 'Your poem is served:' print res def poem_failed(err): print 'No poetry for you.' d = Deferred() # add a callback/errback pair to the chain d.addCallbacks(got_poem, poem_failed) # fire the chain with an error result d.errback(Failure(Exception('I have failed.'))) print "Finished" And after running that script we get this output: No poetry for you. Finished So firing the errback chain is just a matter of calling the errback method instead of the callback method, and the method argument is the error result. And just as with callbacks, the errbacks are invoked immediately upon firing. In the previous example we are passing a Failure object to the errback method like we did in client 3.1. That’s just fine, but a deferred will turn ordinary Exceptions into Failures for us. We can see that with twisted-deferred/defer-3.py: from twisted.internet.defer import Deferred def got_poem(res): print 'Your poem is served:' print res def poem_failed(err): print err.__class__ print err print 'No poetry for you.' d = Deferred() # add a callback/errback pair to the chain d.addCallbacks(got_poem, poem_failed) # fire the chain with an error result d.errback(Exception('I have failed.')) Here we are passing a regular Exception to the errback method. In the errback, we are printing out the class and the error result itself. We get this output: twisted.python.failure.Failure [Failure instance: Traceback (failure with no frames): : I have failed. ] No poetry for you. 
This means when we use deferreds we can go back to working with ordinary Exceptions and the Failures will get created for us automatically. A deferred will guarantee that each errback is invoked with an actual Failure instance. We tried pressing the callback button and we tried pressing the errback button. Like any good engineer, you probably want to start pressing them over and over. To make the code shorter, we’ll use the same function for both the callback and the errback. Just remember they get different return values; one is a result and the other is a failure. Check out twisted-deferred/defer-4.py: from twisted.internet.defer import Deferred def out(s): print s d = Deferred() d.addCallbacks(out, out) d.callback('First result') d.callback('Second result') print 'Finished' Now we get this output: First result Traceback (most recent call last): ... twisted.internet.defer.AlreadyCalledError This is interesting! A deferred will not let us fire the normal result callbacks a second time. In fact, a deferred cannot be fired a second time no matter what, as demonstrated by these examples: - twisted-deferred/defer-4.py - twisted-deferred/defer-5.py - twisted-deferred/defer-6.py - twisted-deferred/defer-7.py Notice those final callback and errback methods are raising genuine Exceptions to let us know we’ve already fired that deferred. Deferreds help us avoid one of the pitfalls we identified with callback programming. When we use a deferred to manage our callbacks, we simply can’t make the mistake of calling both the callback and the errback, or invoking the callback twenty-seven times. We can try, but the deferred will raise an exception right back at us, instead of passing our mistake onto the callbacks themselves. Can deferreds help us to refactor asynchronous code? Consider the example in twisted-deferred/defer-8.py: import sys from twisted.internet.defer import Deferred def got_poem(poem): print poem from twisted.internet import reactor reactor.stop() def poem_failed(err): print >>sys.stderr, 'poem download failed' print >>sys.stderr, 'I am terribly sorry' print >>sys.stderr, 'try again later?' from twisted.internet import reactor reactor.stop() d = Deferred() d.addCallbacks(got_poem, poem_failed) from twisted.internet import reactor reactor.callWhenRunning(d.callback, 'Another short poem.') reactor.run() This is basically our original example above, with a little extra code to get the reactor going. Notice we are using callWhenRunning to fire the deferred after the reactor starts up. We’re taking advantage of the fact that callWhenRunning accepts additional positional- and keyword-arguments to pass to the callback when it is run. Many Twisted APIs that register callbacks follow this same convention, including the APIs to add callbacks to deferreds. Both the callback and the errback stop the reactor. Since deferreds support chains of callbacks and errbacks, we can refactor the common code into a second link in the chains, a technique illustrated in twisted-deferred/defer-9.py: import sys from twisted.internet.defer import Deferred def got_poem(poem): print poem def poem_failed(err): print >>sys.stderr, 'poem download failed' print >>sys.stderr, 'I am terribly sorry' print >>sys.stderr, 'try again later?' 
def poem_done(_): from twisted.internet import reactor reactor.stop() d = Deferred() d.addCallbacks(got_poem, poem_failed) d.addBoth(poem_done) from twisted.internet import reactor reactor.callWhenRunning(d.callback, 'Another short poem.') reactor.run() The addBoth method adds the same function to both the callback and errback chains. And we can refactor asynchronous code after all. Note: there is a subtlety in the way this deferred would actually execute its errback chain. We’ll discuss it in a future Part, but keep in mind there is more to learn about deferreds. Summary In this Part we analyzed callback programming and identified some potential problems. We also saw how the Deferred class can help us out: - We can’t ignore errbacks, they are required for any asynchronous API. Deferreds have support for errbacks built in. - Invoking callbacks multiple times will likely result in subtle, hard-to-debug problems. Deferreds can only be fired once, making them similar to the familiar semantics of try/ exceptstatements. - Programming with plain callbacks can make refactoring tricky. With deferreds, we can refactor by adding links to the chain and moving code from one link to another. We’re not done with the story of deferreds, there are more details of their rationale and behavior to explore. But we’ve got enough to start using them in our poetry client, so we’ll do that in Part 8. Suggested Exercises - The last example ignores the argument to poem_done. Print it out instead. Make got_poemreturn a value and see how that changes the argument to poem_done. - Modify the last two deferred examples to fire the errback chains. Make sure to fire the errback with an Exception. - Read the docstrings for the addCallbackand addErrbackmethods on Deferred. 28 thoughts on “An Interlude, Deferred” Hi! Minor nitpick: d.addCallbacks(lambda r: out(r), lambda e: out(e)) If the goal is to make it short, what’s wrong with: d.addCallbacks(out, out) Also, on recent Python 2.xen you can from __future__ import print_function, and turn that into: d.addCallbacks(print, print) Of course, you are right. I’m not sure what I was thinking there, though perhaps I meant to indicate, with the different argument names, that the callback and errback arguments receive different kinds of values. But I think your version is better, I’ll make that change with the next update, thank you. That’s a nice tip on the __future__ import, though I think I’ll leave it 2.x style. Yeah, I agree, the second one is definitely best. print, print is just too confusing for people reading 2.x code. More minor nitpicking: there’s a convenience method for blocks that conceptually match with “finally” clauses, much like this one: d.addBoth(out) Which, as the name suggests, adds the cb to both the cb and eb chains. Thanks very much for your posts, again lvh Yes, addBoth is very handy. But one thing I’ve learned teaching (I also teach Python to new programmers): don’t try to introduce too many new things at once 🙂 Hi, exercise 1 is very interesting. I assumed it was because the callbacks/errbacks were chained and looked at twisted.internet.defer.addCallbacks() to verify. I also tried chaining more by combining addBoth(), addCallback(), addErrback(), and addCallbacks(). Very nice introduction to Deferreds. Thank you. Thank you very much for the work and passion you put in this book! You write in the style i usually learn things – i need to know WHY is it like this, not just HOW to do things. 
don't bother calling the errback function in PoetryClientFactory, our program will just run forever, blissfully unaware that anything is amiss.

I think it's worth modifying the paragraph to emphasize that it's not an asynchronous-Python-programming issue of not catching the exception. It's the Twisted reactor's way of working. At least that is what came to my mind when I read: "That shows another difference between the synchronous and asynchronous versions."

Thanks Victor, glad you like the series. I'll try making a little adjustment to that paragraph and you can tell me if it's clearer.

What do you think of the new version? I'm trying to emphasize that in synchronous code the Python interpreter will guarantee that an except: block will run when it's supposed to, while in the asynchronous code we've been using it's mostly up to us to make sure that happens.

I got stuck on this sentence: "That shows another difference between the synchronous and asynchronous versions." So I would just modify it like this: "That shows another difference between the synchronous (where unhandled exceptions are shown by the Python interpreter) and asynchronous (where unhandled exceptions are just logged) versions." But you decide, this is just my opinion.

Ah, I was actually trying to explain the case where no asynchronous exception happens at all, because we forgot to call the errback.

Hi Dave, I have another question: Is there a way I can add a callback that will invoke a callLater? In pseudo code this is what I want to do:

d = maybeDeferred(....)
d.addCallback(method1)
if condition is true:
    d.addCallback(callLater(delay, method2, return-from-method1))

Certainly, but in your pseudo-code you are calling callLater immediately, instead of in the callback. What you want is something like:

d.addCallback(lambda result: reactor.callLater(delay, method2, result))

Makes sense?

Hi Dave. I didn't quite get this. "Contrast that with the asynchronous version: the poem_failed errback is invoked by our code, the clientConnectionFailed method of the PoetryClientFactory. We, not Python, are in charge of making sure the error code runs if something goes wrong." But clientConnectionFailed is Twisted code, right, and Twisted will call it when something goes wrong. We only define what is inside this function. So how are we responsible for making sure the error code runs?

Hello! So I am contrasting try/except, which is built into the Python runtime system, with the asynchronous errback mechanism, which is provided by Twisted (which is just another Python program). Twisted has to add asynchronous error handling on top of Python itself. So in this case by 'we' I mean Twisted and your code together. Does that make sense?

Hi Dave. Another question. How does the deferred know which is a callback and which is an errback?

d = Deferred()
# add a callback/errback pair to the chain
d.addCallbacks(got_poem, poem_failed)
# fire the chain with a normal result
d.callback('This poem is short.')

What if my errback was called got_poem?

It's the order of the arguments to addCallbacks that matters. The first argument is the callback, the second argument is the errback. They could be called anything, it wouldn't matter.

Hi Dave. I modified defer-2.py as below.

# fire the chain with an error result
#d.errback(Exception('I have failed.'))
d.callback(got_poem('PASS'))
d.errback(Exception('I have failed.'))

I'm expecting to see only the callback work, and invoking the errback should throw an error.
This is what I get:

root@test:/twisted/twisted-intro/twisted-deferred# python defer-2.py
Your poem is served:
PASS
Your poem is served:
None
Traceback (most recent call last):
  File "defer-2.py", line 21, in <module>
    d.errback(Exception('I have failed.'))
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 423, in errback
    self._startRunCallbacks(fail)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 483, in _startRunCallbacks
    raise AlreadyCalledError
twisted.internet.defer.AlreadyCalledError

Why does "Your poem is served" appear twice? Doesn't it mean the callback fn "got_poem" was invoked twice? Even though the callback and errback were added to the deferred with d.addCallbacks(got_poem, poem_failed).

In this line:

d.callback(got_poem('PASS'))

You are calling got_poem directly and then the deferred calls it again. Try this:

d.callback('PASS')

That worked. Thanks.

Hi Dave. I'm not clear how the deferred will call the errback. We don't pass it while invoking callWhenRunning: reactor.callWhenRunning(d.callback, 'Another short poem.'). I tried executing the defer-9 example. I have the server running but I get the following error.

root@test:/twisted/twisted-intro# python blocking-server/slowpoetry.py --port 10001 poetry/ecstasy.txt
{'delay': 0.7, 'iface': 'localhost', 'port': 10001, 'num_bytes': 10}
Serving poetry/ecstasy.txt on port 10001.

/twisted/twisted-intro/twisted-deferred# python defer-9.py
End of it
Traceback (most recent call last):
  File "defer-9.py", line 21, in <module>
    d.addBoth(poem_done('End of it'))
  File "defer-9.py", line 16, in poem_done
    reactor.stop()
  File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 580, in stop
    "Can't stop reactor that isn't running.")
twisted.internet.error.ReactorNotRunning: Can't stop reactor that isn't running.

poem_done() is being called without fetching the poem.

In defer-9.py the errback will never be called since the code fires the callback chain and no exceptions are raised by the callbacks.
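For anyone hitting the same ReactorNotRunning error as above: the cause is that poem_done('End of it') calls the function immediately and hands its return value (None) to addBoth. A quick sketch of the working forms, using the names from defer-9.py:

# pass the callable itself; the deferred will call it later,
# with the result (or failure) as its first argument
d.addBoth(poem_done)

# extra positional arguments can be passed through addBoth itself;
# they arrive after the result (a hypothetical two-argument variant):
def poem_done(res, label):
    print label
    from twisted.internet import reactor
    reactor.stop()

d.addBoth(poem_done, 'End of it')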
http://krondo.com/an-interlude-deferred/
CC-MAIN-2016-36
en
refinedweb
JEP 286: Local-Variable Type Inference

var path = Path.of(fileName);                        // infers Path
var fileStream = new FileInputStream(path.toFile()); // infers FileInputStream
var bytes = fileStream.readAllBytes();               // infers byte[]

The variable's type is inferred based on the type of the initializer. If there is no initializer, the initializer is the null literal, the type of the initializer is not one that can be normalized to a suitable denotable type (these include intersection types and some capture types), or the initializer is a poly expression that requires a target type (lambda, method ref, implicit array initializer), then the declaration is rejected. We may additionally consider val or let as a synonym for final var. (In any case, locals declared with var will continue to be eligible for effectively-final analysis.)

The identifier var will not be made into a keyword; instead it will be a reserved type name. This means that code that uses var as a variable, method, or package name will not be affected; code that uses var as a class or interface name will be affected (but these names violate the naming conventions).

Excluding locals with no initializers eliminates "action at a distance" inference errors, and only excludes a small portion of locals in typical programs. Excluding RHS expressions whose type is not denotable would simplify the feature and reduce risk. However, excluding all non-denotable types is likely to be too strict; analysis of real codebases shows that capture types (and to a lesser degree, anonymous class types) show up with some frequency. Anonymous class types are easily normalized to a denotable type. For example, for

var runnable = new Runnable() { ... }

we normalize the type of runnable to Runnable, even though inference produces the sharper (and non-denotable) type Foo$23. Similarly, for capture types Foo<CAP>, we can often normalize these to a wildcard type Foo<?>. These techniques dramatically reduce the number of cases where inference would otherwise fail.

Alternatives

We could continue to require manifest declaration of local variable types. We could support diamond on the LHS of an assignment; this would address a subset of the cases addressed by var.

The design described above incorporates several decisions about scope, syntax, and non-denotable types; alternatives for those choices which were also considered are documented here.

Scope Choices

There are several other ways we could have scoped this feature. One, which we considered, was restricting the feature to effectively final locals (val only).
Using const or final seems initially attractive because it doesn't involve new keywords. However, going in this direction effectively closes the door on ever doing inference for mutable locals. Using def has the same defect. The Go syntax (a different kind of assignment operator) seems pretty un-Javaish.

Non-Denotable Types

We have several choices as to what to do with non-denotable types (null types, anonymous class types, capture types, intersection types). We could reject them (requiring a manifest type), accept them as inferred types, or try to "detune" them to denotable types.

Arguments for rejecting them include:
- Risk reduction. There are many known corner cases with weird types such as captures and intersections in both the spec and the compiler; by allowing variables that have these types, they are more likely to be used, activate corner cases, and cause user frustration. (We are working on cleaning these up, but this is a longer-term activity.)
- Expressibility-preserving. By rejecting non-denotable types, every program with var has a simple local transformation to a program without var.

Arguments for accepting them include:
- We already infer these types in chained calls, so it is not like our programs are free of these types anyway, or that the compiler need not deal with them.
- Capture types arise in situations when you might think that a capture type is not needed (such as var x = m(), where m() returns Foo<?>); rejecting them may lead to user frustration.

While we were initially drawn to the "reject them" approach, we found that there were a significant class of cases involving capture variables that users would ultimately find to be mystifying restrictions. For example, when inferring

var c = Class.forName("com.foo.Bar")

inference produces a capture type Class<CAP>, even though the type of this expression is "obviously" Class<?>. So we chose to pursue an "uncapture" strategy where capture variables could be converted to wildcards (this strategy has applications elsewhere as well). There are many situations where capture types would otherwise "pollute" the result, for which this technique was effective. Similarly, we normalize anonymous class types to their (first) supertype. We make no attempts to normalize intersection or union types. The largest remaining category where we cannot infer a sensible result is when the initializer is null.

Risks and Assumptions

Risk: Because Java already does significant type inference on the RHS (lambda formals, generic method type arguments, diamond), there is a risk that attempting to use var/val with such poly expressions will simply be rejected, which users may find surprising.

Risk: Inferring non-denotable types might press on already-fragile paths in the specification and compiler. We've mitigated this by normalizing most non-denotable types, and rejecting the remainder.

Risk: source incompatibilities (someone may have used "var" as a type name). Mitigated with reserved type names; names like "var" and "val" do not conform to the naming conventions for types, and therefore are unlikely to be used as types. The names "var" and "val" are commonly used as identifiers; we continue to allow this.

Risk: reduced readability, surprises when refactoring. Like any other language feature, local variable type inference can be used to write both clear and unclear code; ultimately the responsibility for writing clear code lies with the user.
http://openjdk.java.net/jeps/286
CC-MAIN-2016-36
en
refinedweb
/*
 * .
 */

//
// observer - notification client for network events
//
#ifndef _H_OBSERVER
#define _H_OBSERVER

#include <Security/utilities.h>

namespace Security {
namespace Network {

class Transfer;

//
// An Observer object has a set (bitmask) of events it is interested in.
// Observers are registered with Transfer and Manager objects to take effect.
//
class Observer {
public:
    virtual ~Observer();

public:
    enum {
        noEvents = 0x000000,           // mask for no events
        transferStarting = 0x000001,   // starting transfer operation
        transferComplete = 0x000002,   // successfully finished
        transferFailed = 0x000004,     // failed somehow
        connectEvent = 0x000800,       // transport level connection done or failed
        protocolSend = 0x001000,       // low-level protocol message sent
        protocolReceive = 0x002000,    // low-level protocol message received

        //@@@ questionable
        resourceFound = 0x000008,      // resource found, OK to continue
        downloading = 0x000010,        // downloading in progress
        aborting = 0x000020,           // abort in progress
        dataAvailable = 0x000040,      // data ready to go
        systemEvent = 0x000080,        // ???
        percentEvent = 0x000100,       // a >= 1% data move has occurred
        periodicEvent = 0x000200,      // call every so often (.25 sec)
        propertyChangedEvent = 0x000400,
        resultCodeReady = 0x004000,    // result code has been received by HTTP
        uploading = 0x008000,          // uploading

        allEvents = 0xFFFFFFFF         // mask for all events
    };
    typedef uint32 Event, Events;

    void setEvents(Events mask)      { mEventMask = mask; }
    Events getEvents() const         { return mEventMask; }
    bool wants(Events events) const  { return mEventMask & events; }

    virtual void observe(Events events, Transfer *xfer, const void *info = NULL) = 0;

private:
    Events mEventMask;
};

} // end namespace Network
} // end namespace Security

#endif //_H_OBSERVER
http://opensource.apple.com//source/Security/Security-30.1/Network/observer.h
CC-MAIN-2016-36
en
refinedweb
jswartwood

Packages by jswartwood

- and1 Queues your asynchronous calls in the order they were made.
- gitfiles Extract files from a git log (using --name-status).
- linefeed Transforms a stream into newline separated chunks.
- meta-rewrite-proxy Sets up a proxy to rewrite meta tags to bring pages within your app's domain. Useful to leverage existing meta data and url locations while creating new (namespaced) Facebook apps.
- slow-proxy Sets up a proxy to forward requests with a specified delay on them.
- svn-log-parser Parses SVN logs into relevant JSON.

Packages Starred by jswartwood

- chownr like `chown -R`
- jade Jade template engine
- chmodr like `chmod -R`
- mongoose Mongoose MongoDB ODM
- minimatch a glob matcher in javascript
- rimraf A deep deletion module for node (like `rm -rf`)
- glob a little globber
- mkdirp Recursively mkdir, like `mkdir -p`
- lodash A utility library delivering consistency, customization, performance, & extras.
- express Sinatra inspired web development framework
- mocha simple, flexible, fun test framework
- stylus Robust, expressive, and feature-rich CSS superset
- mathjs Math.js is an extensive math library for JavaScript and Node.js. It features a flexible expression parser and offers an integrated solution to work with numbers, big numbers, complex numbers, units, and matrices.
- nodemon Simple monitor script for use during development of a node.js app.
- cssmin A simple CSS minifier that uses a port of YUICompressor in JS
- socket.io Real-time apps made cross-browser & easy with a WebSocket-like API
- highland The high-level streams library
- event-stream construct pipes of streams of events
- request Simplified HTTP request client.
- optimist Light-weight option parsing with an argv hash. No optstrings attached.
- and 7 more
https://www.npmjs.org/~jswartwood
CC-MAIN-2014-15
en
refinedweb
Facebook graph plugin

Dependency: compile ":facebook-graph:0.14"

Summary

Installation

grails install-plugin facebook-graph

Description

This plugin provides access to the Facebook Graph API and makes easier the development of a single sign-on using the Facebook Authentication proposal. Source code:. Collaborations are welcome :-)

Configuration

Firstly, you should create a new application in the Facebook developments page. At the end of this process you will get an application id, an application secret code and an API key. Then, go to your Config.groovy file and add the application id and the application secret code:

facebook.applicationSecret='<value>'
facebook.applicationId='<value>'

Running the grails application locally, you may see this error returned from Facebook:

API Error Code: 100
API Error Description: Invalid parameter
Error Message: next is not owned by the application.

In a nutshell, Facebook cannot complete the callback to the url specified in the settings of your FB application config. As a workaround, you can set the domain up in your hosts file. See this thread for more information. Now, your application is ready to interact with Facebook using their Graph API.

Single Sign-on with Facebook accounts

First, read the section "Single Sign-on with the JavaScript SDK" in. It is a good explanation about what we want to get. Read it, please. The plugin provides a tag that should be included in all pages (the main template is a good place):

<fbg:resources/>

You can set an optional locale object in this tag. The tag adds the appropriate <script> tag according to the request locale (or the locale set as attribute) and the call to the FB.init function, needed to keep the Facebook cookie updated. The default options used in the FB.init call are {status:true, cookie: true, xfbml: true}; you can provide specific values for these attributes. The code inserted by <fbg:resources/> by default is:

<div id="fb-root"></div>
<script src=""></script>
<script>
FB.init({oauth:true, appId: 'your app id', cookie: true, xfbml: true, status: true});
</script>

If you prefer to use https instead of http in the facebook url to get the all.js file, set the config property facebook.secure to true in your Config.groovy, along with the applicationSecret and applicationId.

Now you are ready to add your "Login with Facebook" button. First, add the fb namespace in your <html> tag (again, the main layout is a good place):

<html xmlns="" xmlns:fb="">

Then add the facebook login button where you want:

<fb:login-button ...>
    <g:message ... />
</fb:login-button>

NOTE: Facebook has made a change for OAuth 2.0, which requires the 'perms' parameter to be called 'scope'. Read this page to know more about permissions.

The facebookLogin function is the Javascript function that will be called when the Facebook login ends successfully. An example of this function (that you should provide):

<script type="text/javascript">
function facebookLogin() {
    FB.getLoginStatus(function(response) {
        if (response.status === 'connected') {
            // logged in and connected user, someone you know
            window.location = "${createLink(controller:'auth', action:'facebookLogin')}";
        }
    });
}
</script>

With this function, if the facebook login succeeds, the application will redirect the browser to the action /auth/facebookLogin. To end the process, in this action I usually recover an object of the domain class that represents a user in my application (eg: User.groovy).
You could have a facebookUID attribute in this User class, and at this point you will have a value in session.facebook.uid with the facebook UID of the authenticated user. Make a simple search in User and you will get your User object associated with the authenticated Facebook user. If you don't locate any User with this facebookUID, it means that the Facebook authenticated user is new in your application, so you should create a new User object, associate the Facebook UID stored in session.facebook.uid and save it. Congrats, you have a new user from Facebook.

The session.facebook map is maintained by a filter added in the plugin. All will be right if you include the tag <fbg:resources/> in your pages.

FacebookGraphService

The plugin provides this service to facilitate the access to the Facebook Graph API. Inject it in your artifacts declaring this attribute:

def facebookGraphService

If you have the <fbg:resources/> in your pages and have a valid Facebook session, the filter added in the plugin will keep the map session.facebook updated with information about your Facebook session. That is the unique requirement to use the FacebookGraphService properly. If you haven't got a valid session.facebook map, the methods of the service will return null.

FacebookGraphService.getFacebookProfile()

This method returns the Facebook profile information about the user associated with the map in session.facebook. The result is in JSON format.

FacebookGraphService.getFriends()

This method returns a list with all the friends of the user associated with the map in session.facebook. The result is in JSON format.

FacebookGraphService.publishWall(message)

This method publishes the message passed as parameter on the wall of the user associated with the map in session.facebook. The parameter should be a String.

FacebookGraphService.publishWall(map = [:])

Since version 0.7. This method publishes the map passed as parameter on the wall of the user associated with the map in session.facebook. The expected parameter is a map, so you can provide more than just the message. For instance:

facebookGraphService.publishWall(message:"The message", link:"", name:"The name of the link")

You can see the complete list of supported arguments in this Facebook documentation page.

FacebookGraphService.getProfilePhotoSrc(facebookUID)

This method generates the url of the public picture associated with the Facebook profile whose UID equals the passed parameter. It is a useful method to get the pictures of the Facebook friends of a user (first we call the getFriends method and then we call the getProfilePhotoSrc method for each friend).

FacebookGraphService.api(path, facebookData, params = [:], method = 'GET')

This is the basic method to interact with the Facebook Graph API.
- path: The relative path that will be concatenated to the URL to invoke the API method. See the Facebook documentation for more information about valid paths.
- facebookData: The map stored in session.facebook.
- params: if they are needed by the method to invoke.
- method: GET (default) or POST (to publish content)

Using in a Tag

Create a tag in grails-app/taglib/:

import grails.converters.JSON

class FacebookTagLib {
    def facebookGraphService

    def fbInfo = { attrs ->
        if (session.facebook) {
            def myInfo = JSON.parse(facebookGraphService.getFacebookProfile().toString())
            out << "<br/>id" << myInfo.id
            out << "<br/>first_name:" << myInfo.first_name
            out << "<br/>Last Name:" << myInfo.last_name
            out << "<br/>Gender:" << myInfo.gender
            out << "<br/>Timezone:" << myInfo.timezone
            out << "<br/>Home Town:" << myInfo.hometown
            out << "<br/>Link:" << myInfo.link
            out << "<br/>Photo:" << "<img src='${facebookGraphService.getProfilePhotoSrc(myInfo.id)}'/>"
        } else {
            out << "Not logged in to Facebook"
        }
    }
}

Then in your view you can just use the following to display Facebook information about the currently logged in user:

<g:fbInfo/>
http://grails.org/plugin/facebook-graph
CC-MAIN-2014-15
en
refinedweb
Perl/Tips

From FedoraProject

DISPLAY environment variable: currently the variable is unset when running locally or in Koji. If you want to run X11 tests, you can do it using the Xvfb X11 server implementation:

%global use_x11_tests 1

%if %{use_x11_tests}
# X11 tests:
BuildRequires: xorg-x11-server-Xvfb
BuildRequires: xorg-x11-xinit
BuildRequires: font(:lang=en)
%endif

%check
%if %{use_x11_tests}
xvfb-run .
https://fedoraproject.org/w/index.php?title=Perl/Tips&oldid=339642
CC-MAIN-2014-15
en
refinedweb
29 March 2012 04:12 [Source: ICIS news]

ARLINGTON (ICIS)--US-based Anellotech is the lowest-cost contender among current bio-paraxylene (PX) developers based on the latest public information available on bio-PX technologies, William Tittle, principal and director of strategy at US consulting firm Nexant, said. Tittle was speaking at the BioPlastek forum held at Arlington in Virginia on 28-30 March.

Anellotech, which uses a fast-fluidized bed catalytic process to convert biomass to benzene, toluene and xylenes (BTX), has the lowest cost of production compared to current integrated naphtha-to-PX production cost and estimated bio-PX processing costs of Gevo and Virent technologies, Tittle said.

"[However], Anellotech is still at a small demonstration scale stage and can only convert C6 sugars, while Virent and Gevo are already much farther along in terms of commercialization stage," Tittle added.

Virent's advantage, according to Nexant, is in being able to convert both C5 and C6 sugars to aromatics via chemical catalytic conversion. Gevo followed a three-step process to convert fermentation-based isobutanol (IBA) to PX. "Gevo's process is operating commercially but it still needs to demonstrate the ring closure process step to PX at the commercial level," Tittle said.

He also discussed the Netherlands-based Avantium, which is developing a process that produces furan dicarboxylic acid, an alternative to the PX-based feedstock purified terephthalic acid (PTA). "Avantium has the challenges of any new molecule in an established application. Avantium will need an extensive registration process under the US Environmental Protection Agency (EPA) over some concerns in furans," according to Tittle. "The company will also need to demonstrate seamless integration of its polyethylene furanoate in the current one trillion dollar plastic fabrication infrastructure."

Avantium has developed polyethylene furanoate as an alternative to polyethylene terephthalate (PET). The components of polyethylene furanoate include bio-based ethylene glycol (EG) and furan dicarboxylic acid.
http://www.icis.com/Articles/2012/03/29/9545758/us-based-anellotech-the-lowest-cost-bio-paraxylene-producer.html
CC-MAIN-2014-15
en
refinedweb
Friday Spotlight: Writing a Broker for Oracle Secure Global Desktop

By TheSGDEngTeam-Oracle on Apr 11, 2014

IVirtualServerBroker is the key interface and all Secure Global Desktop brokers must implement it. These are the methods where a broker writer would typically add logic.

A Sample Broker

Let's look at a scenario where writing a broker could simplify operations. You are an administrator of a Secure Global Desktop deployment where users periodically need to access a key application. Only a single instance of the application can run on any one server and you have a limited number of licenses. When a user requires access to the application, they submit a service request to reserve one.

The traditional, broker-less approach in Secure Global Desktop would be to create an application object, configure it to run on the reserved application server and then assign it to the user who reserved it. Time-consuming if you have to do this repeatedly and on a regular basis. However, if we can access the reservation database, we can do it dynamically in a broker. The operation now becomes:

- Once only, the administrator assigns the application a dynamic application server configured with a custom broker

Then,

- User submits a service request and reserves a server
- User logs into Secure Global Desktop and clicks the link to launch the application
- The broker queries the database, gets the server that the user has reserved and launches the application on it

For the administrator, there is no need to create, modify or destroy objects in the Secure Global Desktop datastore every time a user submits a service request.

Skipping many details, defensive coding and exception handling, the broker code would look something like this:

package com.mycompany.mypackage;

import com.tarantella.tta.webservices.vsbim.*;
import java.sql.*;
import java.util.*;

public class DbBroker implements IVirtualServerBroker {
    static private final String TYPE = "A Description";
    private Connection dbConn;

    public void initialise(Map<String, String> parameters)
            throws VirtualServerBrokerException {
        // Connect to reservation database. End-point and credentials are
        // supplied in the parameters from the dynamic application server
        dbConn = DriverManager.getConnection(parameters.get("URL"),
            parameters.get("USER"), parameters.get("PASS"));
    }

    public Map<String, List<ICandidateServer>> getCandidateServers(
            Map<String, String> parameters)
            throws VirtualServerBrokerAuthException, VirtualServerBrokerException {
        Map<String, List<ICandidateServer>> launchCandidates =
            new HashMap<String, List<ICandidateServer>>();
        // Get the user identity
        String identity = parameters.get(SGD_IDENTITY);
        // Lookup the application server for that user from the database
        Statement statement = dbConn.createStatement();
        String query = createQuery(identity);
        ResultSet results = statement.executeQuery(query);
        // Parse results
        String appServerName = parseResults(results);
        if (appServerName != null) {
            // Create the assigned server.
            CandidateServer lc = new CandidateServer(appServerName);
            lc.setType(TYPE);
            List<ICandidateServer> lcList = new ArrayList<ICandidateServer>();
            lcList.add(lc);
            launchCandidates.put(TYPE, lcList);
        }
        return launchCandidates;
    }

    public ICandidateServer prepareCandidate(String type, ICandidateServer candidate)
            throws VirtualServerBrokerException {
        // Nothing to do
        return candidate;
    }

    public void destroy() {
        // Close the connection to the database
        dbConn.close();
    }

    // And the other methods

    public boolean isAuthenticationRequired() {
        // No user authentication needed
        return false;
    }

    public Scope getScope() {
        // Scope at the application level for all users.
        return Scope.APPLICATION;
    }
}

In summary, dynamic launch and custom brokers can simplify an administrator's life when operating in a dynamic environment. The broker can get its data from any source with a suitable interface: a database, a web server or VM providers with open APIs. Next time, we'll illustrate that with a broker connecting to Virtual Box.
https://blogs.oracle.com/virtualization/tags/virtualizaton
CC-MAIN-2014-15
en
refinedweb
Project Information

python-nss is a Python binding for NSS (Network Security Services) and NSPR (Netscape Portable Runtime). NSS provides cryptography services supporting SSL, TLS, PKI, PKIX, X509, PKCS*, etc. NSS is an alternative to OpenSSL and used extensively by major software projects. NSS is FIPS-140 certified.

NSS is built upon NSPR because NSPR provides an abstraction of common operating system services, particularly in the areas of networking and process management. Python also provides an abstraction of common operating system services, but because NSS and NSPR are tightly bound, python-nss exposes elements of NSPR.

For information on NSS and NSPR, see the following:
- Network Security Services (NSS). NSS project page.
- Netscape Portable Runtime. NSPR project page.
- NSPR Reference. NSPR API documentation.

Design Goals

NSS and NSPR are C language APIs which python-nss "wraps" and exposes to Python programs. The design of python-nss follows these basic guiding principles:

- Be a thin layer with almost a one-to-one mapping of NSS/NSPR calls to Python methods and functions. Programmers already familiar with NSS/NSPR will be quite comfortable with python-nss.
- Be "Pythonic". The term Pythonic means to follow accepted Python paradigms and idioms in the Python language and libraries. Thus when deciding if the NSS/NSPR API should be rigidly followed or a more Pythonic API provided, the Pythonic implementation wins, because Python programmers do not want to write C programs in Python; rather, they want their Python code to feel like Python code with the richness of full Python. (A short illustrative sketch of these conventions appears after the Project History section below.)
- Identifier names follow the preferred Python style instead of the style in the NSS/NSPR C header files.
- Classes are camel-case. Class names always begin with an upper case letter and are then followed by a mix of lower and upper case letters; an upper case letter is used to separate words. Acronyms always appear as a contiguous string of upper case letters.
- Method, function and property names are always lower case with words separated by underscores.
- Constants are all upper case with words separated by underscores; they match the NSS/NSPR C API.
- Every module, class, function, and method has associated documentation and is exposed via the standard Python methodology. This documentation is available via the numerous Python documentation extraction tools. Also see the generated HTML documentation provided with each release. The current release's documentation can be found here.
- NSS/NSPR structs are exposed as Python objects.
- NSS/NSPR functions which operate on a NSS/NSPR object (i.e. struct) become methods of that object.
- NSS/NSPR objects which are collections support the Python iteration protocol. In other words they can be iterated over, indexed by position, or used as slices.
- NSS/NSPR objects whose collection elements can be referenced by name support associative indexing.
- NSS/NSPR objects which have "get" and "set" API function calls are exposed as Python properties.
- All NSS/NSPR Python objects can print their current value by evaluating the Python object in a string context or by using the Python str() function.
- Support threading. The Python Global Interpreter Lock (GIL) is released prior to calling NSS/NSPR C functions and reacquired after the NSS/NSPR C function returns. This allows other Python threads to execute during the time a NSS/NSPR function is in progress in another thread. Also, any "global" values which are set in python-nss are actually thread-local. Examples of this are the various callbacks which can be set and their parameters. Thus each thread gets its own set of callbacks.
- Many methods/functions provide sane default (keyword) parameters, freeing the Python programmer from having to specify all parameters yet allowing them to be overridden when necessary.
- Error codes are never returned from methods/functions. python-nss follows the existing Python exception mechanism. Any error reported by NSS/NSPR is converted into a Python exception and raised. The exact error code, error description, and often contextual error information will be present in the exception object.
- Enumerated constants used in the NSS/NSPR APIs are available in the Python module under the exact same name as they appear in the C header files of NSS/NSPR.
- Convenience functions are provided to translate between the numeric value of an enumerated constant and its string representation and vice versa.
- python-nss internally supports UTF-8. Strings may be Python str objects or Python unicode objects. If a Python unicode object is passed to a NSS/NSPR function it will be encoded as UTF-8 first before being passed to NSS/NSPR.
- python-nss tries to be flexible when generating a print representation of complex objects. For simplicity you can receive a block of formatted text, but if you need more control, such as when building GUI elements, you can access a list of "lines", where each line is paired with an indentation level value. The (indent, text) pairs allow you to insert the item into a GUI tree structure or simply change the indentation formatting.
- Deprecated elements of the python-nss API are marked with Python deprecation warnings as well as being documented in the nss module documentation. As of Python 2.7 deprecation warnings are no longer reported by default. It is suggested Python developers using python-nss periodically run their code with deprecation warnings enabled (one way to do so is shown after this section). Deprecated elements will persist for at least two releases before being removed from the API entirely.

Project History

Red Hat utilizes both NSS and Python in many of its projects; however, it was not previously possible to call NSS directly from Python. To solve this problem Red Hat generously funded the initial development of python-nss as well as its continued maintenance. Red Hat, following its open source philosophy, has contributed the source to the Mozilla security project. Red Hat welcomes all interested contributors who would like to contribute to the python-nss project as part of an open source community.

The initial release of python-nss occurred in September 2008 with its inclusion in the Fedora distribution. The source code to python-nss was first imported into the Mozilla CVS repository on June 9th 2009. python-nss is currently available in:
- Fedora
- RHEL 6

The principal developer of python-nss is John Dennis jdennis@redhat.com. Additional contributors are:
- Miloslav Trmač mitr@redhat.com

The python-nss binding is still young despite having been utilized in several major software projects. Thus its major version number is still at zero. This is primarily so the developers can make changes to the API as experience grows with it. For example, it is already known there are some naming inconsistencies. Elements of the API are probably not ideally partitioned into proper namespaces via Python modules. Some functionality and interface have already been deprecated due to lessons learned.
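To make the "Pythonic" design goals above concrete, here is a small hypothetical sketch; the certdb/cert names and their attributes are invented purely for illustration and are not necessarily the real python-nss API:

def show_and_trust(certdb, name, trust_flags):
    # collections support the Python iteration protocol
    for cert in certdb:
        # any object prints its value via str()
        print str(cert)
        # "get"/"set" C API pairs surface as Python properties
        if cert.subject == name:
            cert.trust = trust_flags
    # errors are never returned as codes; NSS/NSPR failures would
    # surface here as raised Python exceptions instead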
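And since Python 2.7 stopped showing deprecation warnings by default, one standard-library way to re-enable them while testing code that uses python-nss is:

import warnings

# Make DeprecationWarning visible again (hidden by default since 2.7)
warnings.simplefilter('default', DeprecationWarning)

# Equivalently, run: python -W default::DeprecationWarning yourscript.py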
Thus at some point in the future, when it is felt the API has solidified and been further proven in the field, a 1.0 release will be made. At that point in time existing users of the python-nss API will need to update some elements of their code. A migration script will be provided to assist them.

Licensing Information

python-nss is available under the Mozilla Public License, the GNU General Public License, and the GNU Lesser General Public License. For information on downloading python-nss releases as tar files, see Source Download.

Documentation

python-nss API documentation: The python-nss API documentation for the current release can be viewed at python-nss API documentation. The API documentation is generated from the python-nss source code and compiled modules. You can build it yourself via ./setup.py build_doc. Most distributions include the python-nss API documentation in the python-nss packaging. Consult your distribution for more information.

Example Code: The doc/examples directory contains numerous examples of python-nss programs and libraries you may wish to consult. They illustrate suggested usage and best practice.

Test Code: In addition the test directory contains unit tests that also illustrate python-nss usage; however, unlike the examples, the unit tests are geared towards testing rather than expository illustration.

Other Documentation: The doc directory contains other files you may wish to review.

How to Report a Bug

python-nss bugs are currently being tracked in the Red Hat bugzilla system for Fedora. You can enter a bug report here.

Source Download Area

Source downloads are maintained here. Links to the download URL for a specific release can be found in the Release Information section.

Mozilla Source Code Management (SCM) Information

On March 21, 2013 the NSS project switched from using CVS as its source code manager (SCM) to Mercurial, also known as hg. All prior CVS information (including release tags) was imported into the new Mercurial repositories; as such there is no need to utilize the deprecated CVS repositories, use Mercurial instead. To check out python-nss source code from Mercurial do this:

hg clone

The SCM tags for various python-nss releases can be found in the Release Information. You may want to review the Getting Mozilla Source Code Using Mercurial documentation for more information on working with Mercurial. The old deprecated CVS documentation can be found here: Getting Mozilla Source Code Using CVS. The old deprecated python-nss CVS source code location is mozilla/security/python/nss.

Release Information

- Release 0.14.1
- Release 0.14.0
- Release 0.13.0
- Release 0.12.0
- Release 0.11.0
- Release 0.10.0
- Release 0.9.0
- Release 0.8.0
- Release 0.7.0
- Release 0.6.0
- Release 0.5.0
- Release 0.4.0
- Release 0.3.0
- Release 0.2.0
- Release 0.1.0
https://developer.mozilla.org/en-US/docs/Python_binding_for_NSS
CC-MAIN-2014-15
en
refinedweb
Opened 7 months ago
Closed 7 months ago

#8371 closed bug (invalid)

ghci byte compiler + FFI crashes when used with embedded R

Description

The ghci interpreter destroys the C stack when initializing embedded R (the statistical software system available at). There is no problem using embedded R with ghc (compiled code). I have had no problems using ghci with other FFI projects, and this does not appear to be a linking problem (there are no undefined references). To reproduce the problem (under Fedora Linux using ghc 7.6.3) download the R source code, unpack, and (using the attached haskellRtest.hs):

- cd R-3.0.2
- ./configure --enable-R-shlib
- make
- make install
- cd <haskelltestdir>
- ghci -L/usr/local/lib64/R/lib -lR haskellRtest.hs
- Main> main

Initialize R session...
Error: C stack usage is too close to the limit

Notes:
- No computations are done, the failure happens during startup.
- The C functions called are in <R source>/src/unix/Rembedded.c
- The error message is issued from <R source>/src/main/errors.c
- I tried increasing the system level C stack size limit but this didn't help.
- As noted above, there are no problems when ghc is used.

Attachments (1)

Changed 7 months ago by dsamperi

Change History (9)

comment:1 Changed 7 months ago by rwbarton

Does it work when you compile with the threaded runtime, like ghci uses? (ghc -threaded haskellRtest.hs -L/usr/local/lib64/R/lib -lR should do it)

comment:2 Changed 7 months ago by dsamperi

It works with the command line you provide here, but as I noted in the report there are no problems when ghc is used. When I try to use -threaded with ghci I get the warning "-debug, -threaded and -ticky are ignored by GHCi".

comment:3 Changed 7 months ago by dsamperi

This is just a stab in the dark, but while reviewing the docs on FFI I discovered that it is assumed ints can be used for 64-bit addressing, that is, the difference between two pointer values can be stored in an int. But under Windows this is not so (32-bit ints), and I think R also employs 32-bit ints. If this is the source of the problem then it should show up in both ghc and ghci, but it does not. Again, just a stab in the dark...

comment:4 Changed 7 months ago by carter

Have you tried testing to see if the problem also happens with ghc head? Ghci prior to 7.7 has its own custom linker and is the culprit behind many ghci specific bugs.

comment:5 Changed 7 months ago by dsamperi

Thanks for the suggestion. I tried to build from HEAD but it requires Happy 1.19, and only version 1.18 would install (due to a global constraint?). So I used the lazy method of downloading a pre-built binary for Linux from, specifically, I installed from ghc-7.7.20130720-x86_64-unknown-linux.tar.bz2. Unfortunately, this did not resolve the problem, as I still get the C stack usage Error.

comment:6 Changed 7 months ago by rwbarton

I can reproduce this without ghci, by putting a forkIO around the body of main (and adding a threadDelay in the main thread). It seems to just be an interaction between R's method for determining the base address of the stack and the way pthread allocates stacks for new threads. Try this C example program pt.c.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

int Rf_initEmbeddedR(int argc, char **argv);

void *foo(void *blah)
{
    char *args[] = {"pt", "--gui=none", "--silent", "--vanilla"};
    int r;
    setenv("R_HOME", "/usr/lib/R", 1);
    r = Rf_initEmbeddedR(sizeof(args)/sizeof(args[0]), args);
    printf("r = %d\n", r);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, foo, NULL);
    while (1)
        sleep(1);
    return 0;
}

rwbarton@adjunction:/tmp$ gcc -o pt pt.c -lpthread -lR
rwbarton@adjunction:/tmp$ ./pt
Error: C stack usage is too close to the limit
Error: C stack usage is too close to the limit
r = 1

It would probably be best to just disable R's stack limit checks, if possible.

comment:7 Changed 7 months ago by dsamperi

I accidentally replied to you in gmail (sent to ghc-devs) instead of in this comment thread, sorry! Your comment combined with the suggestion that I use ghc-7.7 provides a work-around. I tried disabling R's stack limit checks, but this leads to a segfault. A correct workaround is to use the flag -fno-ghci-sandbox, as this prevents ghci from forking a thread (all computations are run in the main thread). As noted previously R is not thread-safe. This eliminates the "C stack usage" error messages, but there is still a segfault when ghc-7.6.3 is used (and a non-trivial computation is done). Using ghc-7.7 fixes this. Thanks!

comment:8 Changed 7 months ago by rwbarton
- Resolution set to invalid
- Status changed from new to closed

Great, glad you were able to get it working!
https://ghc.haskell.org/trac/ghc/ticket/8371
CC-MAIN-2014-15
en
refinedweb
On Sat, 03 Mar 2007 15:36:14 -0800, Paul Rubin wrote:

> James Stroud <jstroud at mbi.ucla.edu> writes:
>> for akey in dict1:
>>     if some_condition(akey):
>>         dict2[akey] = dict2.pop(akey)
>>
>> Which necessitates a key is a little cleaner than your latter example.
>
> Yeah, I also think removing keys from a dict while iterating over it
> (like in Steven's examples) looks a bit dangerous.

It is dangerous. That's why I didn't do it. I very carefully iterated over a list, not the dictionary, and in fact put in a comment explicitly saying that you can't iterate over the dictionary:

for key in some_dict.keys():
    # can't iterate over the dictionary directly!
    do_something_with(some_dict.pop(key))

If you try to iterate over the dictionary directly, you get a RuntimeError exception when the dictionary changes size. Unfortunately, the exception isn't raised until AFTER the dictionary has changed size.

>>> D = {1:1, 2:2}
>>> for key in D:
...     D.pop(key)
...
1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: dictionary changed size during iteration
>>> D
{2: 2}

That's a gotcha to watch out for: the exception isn't raised until the damage is done.

> Assuming you meant "dict1.pop" instead of dict2.pop above, your
> example might be written
>
> dict2 = dict((k, dict1.pop(k)) for k in dict1 if some_condition(k))
>
> avoiding some namespace pollution etc.

You get a RuntimeError exception when dict1 changes size.

You know, if I were easily offended, I'd be offended that you accused _me_ of writing dangerous code when my code both worked and worked safely, while your code failed and did damage when it did so (dict1 irretrievably loses an item). *wink*

-- Steven.
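For what it's worth, a repaired version of Paul's one-liner (an illustration, not part of the original thread) avoids the RuntimeError by iterating over a snapshot of the keys:

def move_matching(dict1, some_condition):
    # In Python 2, dict1.keys() builds a list first, so popping from
    # dict1 while the generator iterates over that snapshot is safe.
    # (On Python 3, keys() is a live view; use list(dict1) instead.)
    return dict((k, dict1.pop(k)) for k in dict1.keys() if some_condition(k))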
https://mail.python.org/pipermail/python-list/2007-March/458919.html
CC-MAIN-2014-15
en
refinedweb
index Tutorials

Software Testing and competition, developers are using latest technology to deliver the high quality software to their clients. Web and the way business are done online and offline is evolving very fast. New software development and testing techniques

Technology index page Technology related terms are explained here. Learn it by the tutorials and examples explained here

EAI - Enterprise Application Integration EAI,Enterprise Application Integration,EAI Technology,EAI software technology organizations to understand the emergence of a type of software that eases... Information Technology (IT) has become very critical for successful... the amount of software development significantly. Unfortunately these packaged

Software Development, Software Services Company technology which helps the developer to build dynamic web applications. We...Software Development Services Software development services in India from Rose India Technologies (P) Ltd. Rose India Technologies Pvt. Ltd. is software

Tutorials - Java Server Pages Technology from a software developer. The JSP technology is blessed with a number... Tutorials - Java Server Pages Technology ... (JSP) technology is the Java platform technology for delivering

Best Open Source Software impatient can consult the index below. Open source' software...Best Open Source Software Best Open Source Open source software. Often... source software has been responsible for key functions of the Internet for many

Cloud Computing Technology Cloud Computing Technology HI, What is Cloud Computing technology? Thanks Cloud Computing Cloud Computing is a recent technology... the service provider provides the customer with software, infrastructure, platform

Java Programming: Chapter 9 Index with computers. And we've all heard stories about software glitches that cause... Chapter | Previous Chapter | Main Index

technology technology can spread virus by phone number?...is it possible adding one mobile phone number to attack that mobile phone

VoIP Software VoIP Software There are many software and tools available for the VoIP. You can use these hardware and software to make the full use of VoIP technology. Here are the list: VoIP Conference VoIP Accessories

VoIP Technology and that deliver this technology in unique ways through a variety of software and Web... VoIP Technology The VoIP Technology Voice over Internet Protocol

Software Design Software Design The process of solving problems and planning for a software solution is known as software design. In this process, first the purpose

Technology What is and FAQs in South Korea, the developer country of this technology. ... Technology What is and FAQs ... an XML-based language like EAI the other business software. It is used

What is Android Technology? technology thanks to its status as for providing an open source platform has already... technology platforms backdated in the coming time. One of the most viable instances of the onslaught of Android technology over its competitors can be best observed

Cloud Computing: An Overview is a technology based on Internet that provides the users with software...Cloud computing is a newer concept in the field of Information technology that can be said to be a revolution in the field of web-services. Cloud computing

Open or Open Source Software software in Computer Terminology? In a computer terminology, open or open source software means any technology or a software that are freely available to download and use.
Because such kinds of software's are not protected under

The quick overview of JSF Technology ; The quick Overview of JSF Technology This section gives you an overview of Java Server Faces technology, which... problems. So their collective effort brought a new technology named Java Server Faces

What is Software Project Management? of technology becoming the determinant of success most of the large or crucial software... for the developer to follow, etc. Other important aspects of a software development plan...These days software can as well be designated as the bloodline of our modern

Careers in Software Testing Careers in Software Testing Searching loopholes could be paying job option. Are you aware of it? If finding flaws is your passion, the field of software... and find out the bugs if any. The range of software testing doesn’t ends

Uses of Android Technology calls android technology a system of software stack that consists of various... one or two things about android technology which in recent years almost became... interface based on touch or even gesture. Uses of android technology not just

Software Website Templates Rose India Technology offers various categories of Software website templates for the web developing company. We offer e-commerce software templates or shopping cart software templates suited to corporate world. Our Customized Software

iPhone Development iPhone Development Apple Inc. in June 2007 had created a revolution... technology. Its' sleek and compact outlook is fascinating and its graphics.... Apple had slapped some restriction on iPhone's software, in which Apple's

What is Index? What is Index? What is Index

software software. how we can design software via programming. and how we can give the graphical degin. please you send me any example making software. i will wait..., I which language you want to make software? What type if software you have

Database Technology Database Technology Data is the flesh of any application, which is accessed from the server... management, Quality Analysis, software testing, web administration and many other

software into software. and how we give the graphical structure.suppos i want to make...++. please you give me guidness about making software. sir you tell me its... and interest.sir want to become software maker. sir i wish you understand my

Fleet management software technologies Fleet management software technologies are coming handy to all the businessmen... in their day-to-day work. This technology helps the manger to maintain all the paperwork... of their investment back. This software are linked to the tracking technologies

Software Questions and Answers Software Questions and Answers View Software Questions and Answers online Discuss Software... these questions to find the answers to your software development problems. At our software

JSF Introduction - An Introduction to JSF Technology JSF Introduction - An Introduction to JSF Technology...; Java Server Faces or JSF for short is another new exciting technology... Introduction section introduces you with cool JSF technology.

Software Development In India,Software Development Outsourcing In India efficient and effective Software Developer team, which assures you the best... methodologies to provide quality software solutions.
We use technology...Outsourcing Software and Application Development to India Software Development

Benefits of Open Source Software For the last few years several open source software completely changed our regular interaction with information technology and corresponding applications. Thanks to the wide array of benefits of open source software today's

Software Project Management Software Project Management This section contains details and key factors of Software Project Management. First you need to understand about management... to achieve certain objectives/ goals. What is Software Project Management ?

Software Search index

Software Questions and Answers Software Questions and Answers View Software Questions and Answers online Discuss Software development questions, ask your questions and get answers... browse through these questions to find the answers to your software development

Drop Index Drop Index Drop Index is used to remove one or more indexes from the current database. Understand with Example The Tutorial illustrate an example from Drop Index

Indian Software Development Company Rose India Technologies ? An Indian Software Development Company RoseIndia Technologies is an Indian software development company playing a lead role in providing Web based Software Development Solutions and Support

E-commerce: The Business Idea remained a myth had e-commerce not flourished. Technology reaching its acme has... a revolution, which has given the retailers and also the customers the ultimate..., Financial Transaction services, Travel industry, E-commerce software, Shipping

Technology Articles/FAQs Technology Articles/FAQs What is Technology? The Oxford Dictionary defines technology as the application of scientific knowledge..., and methods to find a solutions of a particular problem. The word technology

GPS fleet tracking software technology is attached. Then with the help of tracking software it can be easily seen...GPS fleet tracking software has become a great companion of the owner... collected by GPS and other tracking technologies. Fleet tracking software

How Technology Brought Change in Modern Management? place in the field of software technology, programming languages, mobile technology...In the recent years innovations in technology have brought change... as the advanced communicative technology brought change in the modern

Software Maintenance Solutions, Application Maintenance and Support Services, Software Maintenance Outsourcing Services Rose India Software Maintenance Services With the ever-increasing technology... technology develop high quality software that is meant to perform specific... on them, software maintenance is becoming more and more crucial to maintain peak

Java technology Java technology how does java technology implement security

Mobile Software Development Solutions Mobile Software Development Rose India's Mobile Software... to the existing one. The adoption of next generation technology multiplies... develops mobile software solutions for Black Berry, Palm, Android

clustered and a non-clustered index? clustered and a non-clustered index? What is the difference between clustered and a non-clustered index

Fleet management system software evaluation Following article is about Fleet management system software evaluation. People... they are running. And fleet management system software is considered as the spine... off place it becomes difficult to keep a close eye.
Here such software acts

VoIP Billing Software VoIP Billing Software VoiceMaster VoIP Billing software VoiceMaster... billing software:- * Standard and Advanced VoIP Billing Functionality

Free GPS Software Free GPS Software GPS Aprs Information APRS is free GPS software for use with packet... to watch over the internet. Introduction of Free GPS Software

Building Projects - Maven2 Source build tool that made the revolution in the area of building projects... machine, developer and architecture without any effort ? Non trivial: Maven.... Technology: Maven is a simple core concept that is activated through IoC

Low Cost iPhone Development , it has created a revolution in the world of smart phone as it has some unique.... Some of the software and applications of this hot gadgets are also expensive... exceptional developers. We use latest art, technology, logics and designs

What is VOIP (Voice Over Internet Protocol)? software installed. Later in 1998, with the development of technology... is one of such emerging technologies that has emerged as a revolution... technology that uses the high speed internet connection or broadband connection

VoIP Management Software Software VoIP stands for Voice over Internet Protocol. This technology takes... VoIP Management Software VoIP Routing and VoIP Management Software Your

Open Sourcing Java by Sun Microsystems ; Sun Microsystems brings a revolution in computer... president of Java developer products and programs, this shouldn’t be seen..., driving innovation and building community while promoting the software in effective

array, index, string array, index, string how can i make dictionary using array...please help

Java Technology Java Technology Hi, What is the use of Java technology? How I can develop a program for hotel booking? Thanks Hi, Java Technology is used to develop many different types of application for businesses. You can

Apple iPad? The Ins and Outs of a Revolutionary Piece of Technology Apple iPad – The Ins and Outs of a Revolutionary Piece of Technology... for your iPad including a protective case and keyboard dock. The software The iPad has been said to retain all the benefits of technology that you

Top 10 Open Source Software source software revolution. Way back in the beginning of nineties this Unix like...Until now open source software were the trendsetters in the modern day... and greatest success stories of software field belong to this category of open source

Software Support Services, Application Support Solutions, Support and Maintenance Services and Solutions in software technology and IT strategies so that their organisation can meet...Software Support Services from Rose India Rose India is prominent software and IT services provider with years of experience in providing capable software

Java arraylist index() Function Java arrayList has index for each added element. This index starts from 0. arrayList values can be retrieved by the get(index) method. Example of Java Arraylist Index() Function import

Mobile J2ME Application Development Services, J2ME Software Development Solutions directories. The revolution in global technology brings... from Rose India The success of any new technology depends upon... (Java 2 Platform, Micro Edition) technology is still comparatively new

GPS Tracking Software GPS Tracking Software ... special tracking software via Internet. The GPS tracking software is not only... software. How it works?
GPS satellite sent signals of the fix position

Software Application Development, Programming and Application Testing Solutions and Services Software Application Development Custom Application Development Outsourcing... from the outsourced software development project and the associated software... success in meeting out client's needs. Software Development Lifecycle
http://roseindia.net/tutorialhelp/comment/64674
CC-MAIN-2014-15
en
refinedweb
This tutorial provides a step-by-step introduction on how to integrate scripts as plugins into a C++/Qt/KDE application. This way, an application can be extended with plugins written in scripting languages such as Python, Ruby and KDE JavaScript.

The following C++ code demonstrates how to execute scripting code in C++ and how to let scripting code deal with QObject instances. For this we define the MyObject class that implements some signals and slots we will access from within scripting code. Scripts are able to access such QObjects as if they were class instances, call slots as if they were member functions, get and set properties as if they were member variables, and connect scripting functions with signals.

// This is our QObject our scripting code will access
class MyObject : public QObject
{
public:
    MyObject(QObject* parent) : QObject(parent)
    {
        // Create the QTimer instance we will use to emit
        // the update() signal with the defined interval.
        m_timer = new QTimer(this);
        // On timeout call the update() scripting function
        // if available.
        connect(m_timer, SIGNAL(timeout()), SIGNAL(update()));
        // Calls the init() scripting function if available.
        // We pass our used QTimer instance and an integer
        // as arguments.
        emit init(m_timer, 2000);
        // Normally we would need to start the timer with
        // something like m_timer->start() but we leave that
        // job up to the script.
    }
    virtual ~MyObject() {}

Q_SLOTS:
    // Return the timers interval in milliseconds
    int interval() const { return m_timer->interval(); }
    // Set the timers interval in milliseconds
    void setInterval(int ms) { m_timer->setInterval(ms); }

Q_SIGNALS:
    // If emitted calls the init(timer,interval) scripting
    // function if available.
    void init(QTimer*, int);
    // If emitted calls the update() scripting function
    // if available.
    void update();

private:
    QTimer* m_timer;
};

// Execute a script file.
static void execScriptFile(MyObject* myobject, const QString& file)
{
    // Create the script container. myobject is the parent QObject,
    // so that our action instance will be destroyed once the myobject
    // is destroyed.
    Kross::Action* action = new Kross::Action(myobject, file);
    // Publish our myobject instance and connect signals with
    // scripting functions.
    action->addObject( myobject, "myobject",
        Kross::ChildrenInterface::AutoConnectSignals);
    // Set the file we like to execute.
    action->setFile(file);
    // Execute the script.
    action->trigger();
}

The execScriptFile function creates an instance of Kross::Action that is used as an abstract container to deal with scripts / script files. We then add our myobject instance to the action. That way scripting code is able to access the published QObject instance, call its slots and connect with the signals. Because Kross::ChildrenInterface::AutoConnectSignals is defined, the init(QTimer*,int) and the update() signals the myobject instance provides will be automatically connected to matching scripting functions. Then the script file that should be executed is set. The used interpreter will be determined by the file-extension, e.g. *.py for Python or *.rb for Ruby. You are also able to set the interpreter explicitly with e.g. action->setInterpreter("python") or action->setInterpreter("ruby"). You are also able to use action->setCode("print 'hello world'") to set the scripting code directly. Finally the script is executed. This is done by triggering the action. Once executed you are also able to use Kross::ErrorInterface to check if the action was executed successfully, as demonstrated below.

if( action->hadError() )
    kDebug() << action->errorMessage() << endl;

The Kross::Manager also provides the option to connect with the started and finished signals that get emitted when a script is executed.

connect(&Kross::Manager::self(), SIGNAL( started(Kross::Action*) ),
        this, SLOT( started(Kross::Action*) ));
connect(&Kross::Manager::self(), SIGNAL( finished(Kross::Action*) ),
        this, SLOT( finished(Kross::Action*) ));

The Kross::ActionCollection class manages collections of Kross::Action instances, enables hierarchies and implements serializing from/to XML.

The following Python script demonstrates what a Python plugin looks like. The init() and the update() functions will be called if the matching signals at the myobject instance are emitted. First we import the myobject module. This module provides us access to the MyObject QObject instance and its slots, signals and properties. Within the init() Python function we set the interval to 2000 milliseconds = 2 seconds. Then we start the QTimer of our MyObject instance by calling its start() slot. Then each 2 seconds the update() signal will be emitted, which in turn calls our update() Python function.

import myobject

# called when the init(QTimer*,int) signal is emitted
def init(timer, interval):
    timer.setInterval(interval)
    timer.start()

# called when the update() signal is emitted
def update():
    # just print the interval.
    print "interval=%i" % myobject.interval()

The following Ruby script does the same as the Python scripting code above, except using the Ruby scripting language. This shows that the same rich API functionality is accessible independent of the used scripting language.

require 'Myobject'

# called when the init(QTimer*,int) signal is emitted
def init(timer, interval)
    timer.setInterval(interval)
    timer.start()
end

# called when the update() signal is emitted
def update()
    # just print the interval.
    puts "interval=%i" % Myobject.interval()
end

The following JavaScript script does the same as the Python and the Ruby scripting code above but uses Kjs+KjsEmbed (both included in kdelibs). So, the same rich API functionality is also accessible from within the JavaScript language.

// this function got called if the init(QTimer*,int)
// signal got emitted.
function init(timer, interval)
{
    timer.setInterval(interval);
    timer.start();
}

// this function got called if the update() signal
// got emitted.
function update()
{
    // just print the interval.
    println( "interval=" + myobject.interval() );
}
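Because slots are callable like ordinary member functions, a plugin can also drive the C++ object. A small illustrative variation of the Python update() function above (not part of the original tutorial) that slows the timer down on every tick via MyObject's setInterval(int) slot:

import myobject

# keep init() as in the Python example above; only update() changes
def update():
    # print the interval, then lengthen it by one second per tick
    print "interval=%i" % myobject.interval()
    myobject.setInterval(myobject.interval() + 1000)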
http://techbase.kde.org/index.php?title=Development/Tutorials/Kross/Scripts-as-Plugins&oldid=10848
CC-MAIN-2014-15
en
refinedweb
02 November 2012 11:23 [Source: ICIS news] SINGAPORE (ICIS)--Elevance Renewable Sciences and palm oil major Wilmar are on track to start up their joint venture biofuel and bio-olefin plant in Gresik, Indonesia, by the end of the year, an Elevance official said on Friday. "The plant, which will use palm oil as feedstock, will initially have a capacity of 180,000 tonnes/year which is expandable to 360,000 tonnes/year," said Andy Corr, platform leader – consumer intermediates and ingredients, with Elevance, on the sidelines of the 2nd ICIS Asian Surfactants Conference. About 80% of the output from the facility will consist of biofuel, while the remaining 20% will be bio-olefin, a key ingredient for jet fuel. Elevance is also setting up a 300,000 tonne/year wholly-owned biofuel facility.
http://www.icis.com/Articles/2012/11/02/9610224/elevance-wilmar-to-start-up-indonesia-biofuel-jv-by-year-end.html
CC-MAIN-2014-15
en
refinedweb
Reducing Crime Rates: Raising The Education Of Prisoners (Education Essay)

Is it possible to reduce crime rates by raising the education of prisoners? If so, would it be cost effective with respect to other crime prevention measures? The motivation for these questions is not limited to the obvious policy implications for crime prevention. Perhaps if we estimate the effect of education on criminal activity we may be able to shed some light on the magnitude of the social return to education. Crime is a negative activity with enormous social costs. Given the large social costs of crime, even small reductions in crime associated with education may be economically important. Researchers (Lochner and Moretti, 2003) have done some investigation to show how education generates benefits beyond the private returns received by individuals. Other researchers (for example, Acemoglu and Angrist, 2000, and Moretti, 2002, 2003) have also investigated how education generates benefits beyond the private returns received by individuals. Yet, little research has been undertaken to evaluate the importance of other types of external benefits of education, such as its potential effects on crime.

In a study carried out in the United States by Huang, Liang and Wang (2003), they noted in their findings that the changes in the U.S. crime rate were co-existent with two significant developments in the U.S. labor market: the sharp decrease in the earnings of young unskilled men in the 1980s and the rapid decline in the aggregate rate of unemployment in the 1990s. This immediately raised the question of whether or not the two events were related in some way. Of course, such a connection might be expected to hold on a priori [1] grounds. The reason is that according to microeconomics, the decision of whether or not to engage in criminal activities is a time allocation problem. As such, changes in the opportunities available to workers in the formal labor market have a direct impact upon the crime rate, by affecting the opportunity cost of criminal behavior. Yet, despite this compelling logic, few theoretical models have been constructed to date that can be used to address these connections more formally. This is particularly surprising, in view of the large body of empirical work that indicates that not only do labor-market opportunities affect criminal behavior, but the crime rate itself also affects labor-market opportunities [2].

Perhaps the most robust finding in this literature is the documented negative correlation between market wages and crime. While Grogger (1998) estimated that a 20% decline in the (youth) wage led to a 20% increase in the crime rate, Gould et al. (2002) documented that changes in the wage accounted for up to 50% of the trend in violent crimes and in property crimes. Studies by Raphael and Winter-Ebmer (2001) and by Gould et al. (2002) also indicated that there is a strong positive link between the unemployment and crime rates [3]. Close empirical relationships were also observed between the rate of crime and human capital acquisition. Indeed, several studies including Witte and Tauchen (1994), Lochner (1999) and Lochner and Moretti (2001) also indicated that achieving a high school education significantly reduces criminal behavior. Ripley (1993) also believed that recidivism rates drop when the education programs are designed to help prisoners with their social skills, artistic development and techniques and strategies to help them deal with their emotions.
Ripley further stressed the importance of teaching moral education as well as critical thinking and problem solving skills. He believed that the works of Harold Herber (1970) and Benjamin Bloom (1956) have fostered the importance of teaching critical thinking and reasoning skills to all learners, especially those that are considered to be at risk. Gerber and Fritsch (1993) evaluated the outcomes of the adult education programs in prison. They distinguished among academic, vocational and social education and concluded that prison education programs lead to a reduction of criminal behavior, continued education after release from prison and fewer disciplinary problems in the prison setting. In addition, inmates who choose to participate in these programs have lower recidivism rates than those who do not participate.

Despite the many reasons to expect a causal link between education and crime, empirical research is not conclusive. It is therefore important for me to cite, in this first section of this review, supporting literature that makes a case for education in prison. With this in mind, I will include some research that states the benefits of education in prison; support the idea that meeting the learning needs of young adults (in prison) is dependent on appropriate learning strategies in a specially designed curriculum (to encourage learning new skills); and that young adults in prison who participate in the curriculum (training activities) will be motivated to learn and behavior change will be facilitated.

Education in prison

Reduces violence

Noguera (1996), in his paper Reducing and Preventing Youth Violence: An Analysis of Causes and an Assessment of Successful Programs, stated that for many youth, the experience of serving time in a large detention center may actually increase the likelihood that they will commit violent crimes again in the future. From his research, Vacca (2004) stressed the point that the right kind of educational program leads to less violence by inmates involved in the programs and a more positive prison environment. Granoff (2005) agreed that educational programs have been met with enthusiasm by the inmates themselves and have shown a proven means to reduce instances of violence within prisons.

Reduces recidivism

Brewster and Sharp (2002) have found that academic education within prisons is more effective at reducing recidivism than many other types of programs, such as work programs and vocational education. Even more specifically, research by Harer (1995) and Brewster and Sharp (2002) indicated that programs that stress achieving educational outcomes, such as General Educational Development (GED) [4] attainment, rather than just the process of education, are more successful. In addition, other researchers (Nuttall, Hollmen, & Staley, 2003; Steurer & Smith, 2003; Brewster & Sharp, 2002; Fabelo, 2002; Norris, Snyder, Riem, & Montaldi, 1996; Harer, 1995) have stated that those inmates who successfully attained a General Education Diploma gained a better chance of not recidivating, because achieving that goal put them on an equal level with other high school graduates when searching for a job.

Lowers crime rates

Machin, Marie and Vujic (2010) stated in their research that there are a number of theoretical reasons why education may have an effect on crime. From the existing socio-economic literature there are (at least) three main channels through which education might affect criminal participation: income effects, time availability, and patience or risk aversion.
For most crimes, one would expect that these factors induce a negative effect of education on crime. In the case of income effects, education increases the returns to legitimate work, raising the opportunity costs of illegal behaviour. Time spent in education may also be important in terms of limiting the time available for participating in criminal activity. Education may also influence crime through its effect on patience and risk aversion. Here, future returns from any activity are discounted according to one's patience in waiting for them.

Affects decisions to engage in crime

Researchers (Usher, 1997; Lochner, 1999; Lochner and Moretti, 2001) have emphasized the role of education as an important determinant of crime. Education has a multiple role in deterring crime: it raises skills and abilities and then improves labour market perspectives, thus implying a higher opportunity cost of crime, and it has a non-market effect that affects the preferences of individuals. Becker (1968) stressed in his research that an increase in law-abidingness due to education would reduce the incentive to enter illegal activities and thus reduce the number of offenses. Studies conducted by Freeman (1991, 1996), Grogger (1995, 1998) and Lochner and Moretti (2001) attempted to clearly identify the relationships between crime and education. Most of the contributions on the effects of education on crime stressed how education raises individuals' skills and abilities, thus increasing the returns to legitimate work and raising the opportunity costs of illegal behaviour. But there exist benefits from education that are not taken into account by individuals; this implies that the social return of education is higher than its private return (Lochner and Moretti, 2001). Education has a non-market effect that affects the preferences of individuals. This effect (the "civilization effect") makes the criminal decision more costly in psychological terms. Lochner (1999) uses a two-period model to look at some simple dynamic relations between education, work and crime. In his paper, he emphasized the role of human capital accumulation on criminal behavior, and the results confirm that graduating from high school directly lowers the tendency to participate in criminal activities. In subsequent joint research, the results obtained allowed Lochner and Moretti (2001) to conclude that education significantly reduces criminal activity.

Provides a different outlook on life

Education may also alleviate the harsh conditions of confinement or "pains of imprisonment" and reduce prisonization, the negative attitudes that are sometimes associated with incarceration. Deprivations of prison or imported criminogenic norms lead to prisonized subculture norms that favor attitudes hostile toward the institution and supportive of criminal activities. By providing safe niches and a more normalized environment, education may provide a basis for reconstruction of law-abiding lifestyles upon release from prison (Harer, 1995).

In the foregoing pages, emphasis was placed on the importance of educating inmates because it is fundamental to the rehabilitation and correction of offending and criminal behavior. Miles D. Harer (1995) went a little deeper in his research to explore the theory that correctional education has a normalizing effect on offenders, and brief information on his findings is shared below.

Harer's Study: Education's Impact on Offending and Criminal Behavior

In 1995 Miles D.
Harer conducted a study, Prison Education Program Participation and Recidivism: A Test of the Normalization Hypothesis, which explored the theory that correctional education programs have a "normalizing" effect on offenders that increases prison safety, reduces recidivism, nurtures pro-social norms, and negates the effects of "prisonization". [5] In his study he found that education programs served to occupy the inmate's time productively, thus limiting the negative influence of prisonization, and further served to socialize and resocialize inmates toward acceptance of prosocial norms. In other words, it was not that specific diploma or certificate programs reduced recidivism, but it was the normalization process that took place in the classroom that was in part responsible for reducing recidivism. His study found that there was approximately a 15 percent greater rate of recidivism among offenders with no educational participation than among offenders who participated in [merely] .5 courses during each six-month period of their incarceration. The study further found that the greater the rate of participation, the lower the recidivism rate dropped. He used these findings to underscore his point regarding the potency of educational experiences in helping inmates to develop the pro-social norms required for successful reentry into society.

A concrete example of Harer's theory might look something like this: An offender from an urban area participates in a horticulture program. In the horticulture program he learns core abilities (that is, thinking creatively and critically, working productively, communicating clearly, working cooperatively, acting responsibly); from his instructor he sees, learns and models pro-social behaviors and mature coping skills. By participating effectively in the program's requirements he learns the all-important skill of working in compliance with a system. Upon release, he returns to his urban community. Jobs in horticulture may be scarce in this area, but the transferable skills he obtained by participating in an educational program (of his own selection and that held his interest) would serve to make him a more valuable employee than someone who failed (while incarcerated) to obtain this collection of mature coping skills. This scenario is a key point in understanding correctional education. It is more important to provide a wide enough variety of programming so the offender has an educational opportunity that has personal validity and provides the offender with a self-identified satisfying outcome.

This study makes a case for education in prison because the research conducted has indicated that it reduces prisonization and nurtures prosocial norms, a process which prepares offenders to be citizens of society and of a particular community when they leave prison. In order for education to be meaningful and effective, the learning needs of inmates must be considered. A framework for addressing these needs must be developed with learning strategies that will be effective and will encourage the learning of new skills. We will explore some possible ways in which this could be achieved.

Meeting the learning needs of young adults (in prison) is dependent on appropriate learning strategies in a specially designed curriculum (to encourage learning new skills)

Conspicuously absent from the research literature in the area of education is a discussion of a theoretical explanation for the connection between education and offending.
Also, what I have not been able to find in the literature is a framework of learning strategies or a curriculum that has been tried and proven effective. Despite this shortcoming, there are many possible ways education (that meets the learning needs of inmates) may encourage the learning of new skills. Some of these possible ways may include the improvement of cognitive skills, the use of appropriate learning strategies and a specially designed curriculum to meet learning needs. In this second section of the review, I will look at the contributions that some researchers made in these areas, including Bloom's taxonomy. I will also draw upon the work of Malcolm Knowles for appropriate adult learning strategies, Jerome Bruner's work that addresses learning needs relevant to curriculum, and Gagne's conditions of learning and its implications for instruction; and since prison is a "community", I will include the work of Jean Lave and Etienne Wenger on communities of practice.

Making a case for improvement of cognitive skills

One mechanism by which education will theoretically affect recidivism is through improvement of inmate cognitive skills. The way individuals think influences whether or not they violate the law (Andrews & Bonta; MacKenzie, 2006). Deficiencies in social cognition (understanding other people and social interactions), problem solving abilities, and the sense of self-efficacy are all cognitive deficits or "criminogenic needs" found to be associated with criminal activity (Foglia, 2000; Andrews, Zinger, Hoge, Bonta, Gendreau & Cullen, 1990; MacKenzie, 2006).

The contribution of Bloom's taxonomy in improving cognitive skills

As we see from the research conducted, emphasizing the need for the improvement of cognitive skills in inmate education is very important. But before these researchers (and many others during their time) there was someone who advocated for a holistic form of education which included cognitive development. This person was none other than Benjamin Bloom (1956), who developed Bloom's Taxonomy with one of the goals being to motivate educators to focus on all three "domains" of the educational objectives. These "domains" were listed as cognitive, affective and psychomotor. For this review I will briefly summarize the three domains to highlight the skills inherent in each one.

Skills in the cognitive domain revolve around knowledge, comprehension, and critical thinking of a particular topic. Traditional education tends to emphasize the skills in this domain, particularly the lower-order objectives. There are six levels in the taxonomy, moving through the lowest order processes to the highest.

Skills in the affective domain describe the way people react emotionally and their ability to feel another living thing's pain or joy. Affective objectives typically target the awareness and growth in attitudes, emotion, and feelings. There are five levels in the affective domain, moving from the lowest order processes to the highest.

Skills in the psychomotor domain describe the ability to physically manipulate a tool or instrument like a hand or a hammer. Psychomotor objectives usually focus on change and/or development in behavior and/or skills. Even though Bloom and his colleagues never created subcategories for skills in the psychomotor domain, other educators have created their own psychomotor taxonomies [6] that help to explain the behavior of typical learners or high performance athletes.
For the curriculum I am hoping to develop (based on the learning needs of inmates), Bloom's educational objectives (and more specifically their sub-categories) will play a very important role in determining the differentiation of curriculum topics and their application.

Other research about the importance of cognitive skills

Other research examining inmate cognitive skills demonstrates a connection between executive cognitive functioning (ECF) and antisocial behavior. According to Giancola (2000), ECF is defined as the cognitive functioning required in planning, initiation, and regulation of goal directed behavior. It would include such abilities as attention control, strategic goal planning, abstract reasoning, cognitive flexibility, hypothesis generation, temporal response sequencing, and the ability to use information in working memory. From this perspective, education may be important in reducing crime because it improves the ability to use and process information.

Some researchers and educators (Batiuk, Moke and Rountree, 1998; Duguid, 1981; Gordon & Arbuthnot, 1987) argue that the importance of education and cognitive skills may be in their ability to increase individuals' maturity or moral development. For example, academic instruction can help instill ideas about right and wrong, and these ideas may be associated with changes in attitudes and behaviors. Taking all of these suggestions into consideration, improving inmates' cognitive skills will require strategies that are suitable for teaching adults. With this in mind, I will seek some guidance from Malcolm Knowles' theory of adult learning strategies.

Meeting adult learning needs through adult learning strategies

Malcolm Knowles (1973) was among the first to propose the theory of the andragogical model, and in this approach he used the term "andragogy" to label his attempt to create a unified theory of adult learning. In 1984, he expanded his four assumptions to six assumptions of learning (theory and process of andragogy) to better explain the needs of the adult learner. In order to develop a framework to meet the basic learning needs of young adults in prison through adult learning strategies, I will explore the theory of Malcolm Knowles' andragogical model and be guided by the suggested strategies.

Knowles' theory of the andragogical model proposes that (a) the adult learner is self-directed, that is, the adult learner makes his/her own decisions and is responsible for his/her own actions; (b) the adult learner has had numerous experiences based on the variety and scope of the adult learner's life roles, so the adult learner has the experiential foundation on which to base his/her learning; (c) the adult learner is ready to learn, so the adult learner seeks answers to what he/she specifically needs to know; (d) the adult learner is oriented to learning, and this orientation may be task-based with a life-centered or problem-centered component to learning; (e) the adult learner is motivated to learn, and this motivation may stem from internal forces that cause the learner to gain self-confidence, recognition, improved self-esteem, and a better quality of life; (f) the adult learner needs to be responsible for his/her decisions on education, that is, involvement in the planning and evaluation of their instruction. With these adult characteristics in mind, the resulting curriculum design will need to be process-based rather than content-based.
This process design will allow the instructor to act as a facilitator who would link numerous resources with the adult learner. Following this line of thought, I will now turn to Jerome Bruner and draw upon his ideas to link the learning needs of inmates to a curriculum relevant to their needs.

Linking learning needs to relevant curriculum

Jerome Bruner (1959, 1960), a true instructional designer, suggested that a learner (even of a very young age) is capable of learning any material so long as the instruction is organized appropriately, in sharp contrast to the beliefs of Piaget and other stage theorists. Like Bloom's Taxonomy, Bruner (1960) suggested a system of coding in which people form a hierarchical arrangement of related categories. Each successively higher level of categories becomes more specific, echoing Benjamin Bloom's (1956) understanding of knowledge acquisition as well as the related idea of instructional scaffolding.

In accordance with this understanding of learning, Bruner proposed the spiral curriculum, a teaching approach in which each subject or skill area is revisited at intervals, at a more sophisticated level each time. He advocated that these fundamental ideas, once identified, should be constantly revisited and reexamined so that understanding deepens over time. This notion of revisiting and reexamining fundamental ideas over time is what has become known as a "spiral curriculum." As time goes by, students return again and again to the basic concepts, building on them, making them more complex, and understanding them more fully.

Bruner (1960) recommended the cognitive-development approach to curriculum design because he suggested that learning and cognitive development are complex events in which the learner may be engaging in any of several activities. Included are interacting with others, manipulating objects, using signs and symbols, constructing mental models and observing and noting the actions and reactions of others. To be effective in the cognitive development approach, we would have to look at the learning objectives and see how the different learning objectives relate to the appropriate instructional design. In order to achieve this, I will now turn to Robert Gagne's conditions of learning and their implications for instruction.

Conditions of learning and implications for instruction

Gagne's (1965) theory of conditions of learning has several implications for instructional technology. The design of instruction involves analyzing requirements, selecting media and designing the instructional events. Additionally, the instructor must keep in mind the following learning concepts when developing methods of instruction: (a) Skills should be learned one at a time, and each new skill learned should build on previously acquired skills; (b) The analysis phase must identify and describe the prerequisite lower level skills and knowledge required for an instructional objective; (c) Lower level objectives must be mastered before higher level ones; (d) Objectives must be stipulated in concrete behavioral terms; and (e) Positive reinforcement should be used in a repetitive manner.

Gagne's (1965) work has made significant contributions to the scientific knowledge base in the field of instructional technology, particularly in the area of instructional design. He outlined several steps that should be used to plan and design instruction.
These include: (a) Identifying the types of learning outcomes; (b) Identifying the prerequisite knowledge or skills each outcome requires; (c) Identifying the internal conditions or processes the learner must have to achieve the outcomes; (d) Identifying the external conditions or instruction needed to achieve the outcomes; (e) Specifying the learning context; (f) Recording the characteristics of the learners; (g) Selecting the media for instruction; (h) Planning to motivate the learners; (i) Testing the instruction with learners in the form of formative evaluation; (j) Using summative evaluation to judge the effectiveness of the instruction.

While objectively analyzing the conditions for learning, Gagné says: "Since the purpose of instruction is learning, the central focus for rational derivation of instructional techniques is the human learner. Development of rationally sound instructional procedures must take into account learner characteristics such as innate capacities, experiential maturity, and current knowledge states. Such factors become parameters of the design of any particular program of instruction" (Gagné, 1987, p. 5).

From Bruner's cognitive approach to curriculum design (a proposed teaching approach in which each subject or skill area is revisited at intervals) to Gagne's conditions of learning, we continue to build a platform and make a case for the development of a curriculum to meet the learning needs of young adults in a prison context. Since prison is a "community", I will borrow from the work of Jean Lave and Etienne Wenger because their characteristics of communities of practice will provide guidance for the practical aspect of the curriculum that is to be developed.

Jean Lave and Etienne Wenger's communities of practice

According to Etienne Wenger (2007), three elements are crucial in distinguishing a community of practice from other groups and communities, and he proposed them as "the domain", "the community" and "the practice". In "the domain", a community of practice is something more than a group of friends or a network of connections between people. It has an identity defined by a shared domain of interest. Membership therefore implies a commitment to the domain, and as a result, a shared competence that distinguishes members from other people. In pursuing their interest in their domain, members engage in joint activities and discussions, help each other, and share information. They build relationships that enable them to learn from each other, thus creating "the community". At "the practice" stage, members of a community of practice are practitioners. They develop a shared repertoire of resources which include experiences, stories, tools, and ways of addressing recurring problems; in short, a shared practice. Getting to this stage takes time and sustained interaction.

As is seen from the above, a community of practice involves much more than the technical knowledge or skill associated with undertaking some task. Members are involved in a set of relationships over time (Lave and Wenger 1991: 98) and communities develop around things that matter to people (Wenger 1998).

In order to develop a curriculum with learning strategies that would meet the needs of adult learners in a prison setting, the foregoing theories, studies and models have been reviewed. The reason is that no one learning theory can possibly address all the complexities found in the various settings and contexts in which learning can occur.
Since this review is undertaken to lay a foundation for the development of a curriculum to meet the needs of inmates that would lead to behavior change, I now turn to the third section of this review, which will look at facilitating behavior change through motivating inmates to participate in curriculum activities.

Young adults in prison who participate in the curriculum (training activities) will be motivated to learn and behavior change will be facilitated

The term motivation is derived from the Latin word movere, meaning "to move." So, in this third section I will look at the contributions of Reginald Revans (1982, 1983), John Keller (1984), Martin Fishbein and Icek Ajzen (1975), Albert Bandura (1986), J. B. Watson (1913), and James O. Prochaska and Carlo C. DiClemente (2003), whose theory, methodology and strategy could be used to "move" inmates, provide stimulus and give direction for behavior change.

A tool of motivation: action learning

Reginald Revans (1982, 1983) developed the Action Learning [7] methodology, which holds many similarities to the learning communities [8] developed by Lave and Wenger (Lave and Wenger 1991: 98). If it is to be distinguished, action learning is basically the small components that create the main team involved in a learning community. It is a learning approach that is used to work with, and develop, people while working with them on a real project or problem. Participants work in small groups or teams, along with a learning coach, and learn how to take action to solve their project or problem, and in the process, learn how to learn from that action. What makes action learning important to the curriculum I am seeking to develop is that it is designed for learning to take place in small groups or "action" groups. The first part of action learning is creating action groups based on programmed learning, "the expert knowledge", and learning from real world experiences. These are small groups. The model of action learning involves learning from experience through reflection and action with the support group. It is important that the groups remain constant and have duration, which would provide the opportunity to establish themselves over a solid time period. This is a model that could be useful in the prison setting.

Sustaining motivation in the learning process

In support of motivational learning for inmates, it is also important to include aspects of John Keller's (1984) ARCS Model of Motivational Design for sustaining the learning process. According to this model, there are four steps for promoting and sustaining motivation in the learning process: Attention, Relevance, Confidence, Satisfaction (ARCS). These are explained in the following summary:

Attention can be gained in one of two ways. One way is perceptual arousal, which uses novel, surprising, incongruous, and uncertain events to gain interest; the other is inquiry arousal, which stimulates curiosity by posing challenging questions or problems to be solved.

Relevance is established in order to increase a learner's motivation. This requires the use of concrete language and examples with which the learners are familiar, and for this, Keller described six major strategies for application.

Confidence can help students understand their likelihood for success. If they feel they cannot meet the objectives or that the cost (time or effort) is too high, their motivation will decrease.
It is therefore important to provide objectives and prerequisites to help students estimate the probability of success by presenting performance requirements and evaluation criteria.

Satisfaction means that learning must be rewarding or satisfying in some way, whether it is from a sense of achievement, praise, or mere entertainment. It must make the learner feel as though the skill is useful or beneficial by providing opportunities to use newly acquired knowledge in a real setting. But, in order to be successful in applying newly acquired skills, inmates would need to develop the appropriate attitude and behavior that would enable this process. I have included Fishbein and Ajzen's (1975) theory of reasoned action, which will provide some helpful information. The theory of reasoned action assumes that individuals consider a behaviour's consequences before performing the particular behavior. In 1985, Ajzen expanded upon the Theory of Reasoned Action, formulating the Theory of Planned Behavior, which also emphasizes the role of intention in behavior performance but is intended to cover cases in which a person is not in control of all factors affecting the actual performance of a behavior. As a result, the new theory states that the incidence of actual behavior performance is proportional to the amount of control an individual possesses over the behavior and the strength of the individual's intention in performing the behavior. In his article, Ajzen (1985) further hypothesizes that self-efficacy is important in determining the strength of the individual's intention to perform a behavior. In support of Ajzen's hypothesis, I will turn to Albert Bandura for his views on self-efficacy.

Self-efficacy theory

In 1986, with the publication of his book Social Foundations of Thought and Action: A Social Cognitive Theory, Bandura advanced a view of human functioning that accorded a central role to cognitive, vicarious, self-regulatory, and self-reflective processes in human adaptation and change. People were viewed as self-organizing, proactive, self-reflecting and self-regulating rather than as reactive organisms shaped and shepherded by environmental forces or driven by concealed inner impulses. From this theoretical perspective, human functioning was viewed as the product of a dynamic interplay of personal, behavioral, and environmental influences. For example, how people interpret the results of their own behavior informs and alters their environments and the personal factors they possess, which, in turn, inform and alter subsequent behavior.

Taking Bandura's theory into consideration, it is logical to concede that perceived self-efficacy is defined as people's beliefs about their capabilities to produce designated levels of performance that exercise influence over events that affect their lives. Self-efficacy beliefs therefore determine how people feel, think, motivate themselves and behave. Such beliefs produce these diverse effects through cognitive, motivational, affective and selection processes. I will include some brief insights from J. B. Watson's behaviorism, which explains the relationship between stimuli, response and reward (or punishment) as part of the theories of learning.

Theories of learning and behavior change

John B. Watson coined the term "Behaviorism" in 1913, and this theory assumes that behavior is observable and can be correlated with other observable events. Thus, there are events that precede and follow behavior.
Behaviorism's goal is to explain relationships between antecedent conditions (stimuli), behavior (responses), and consequences (reward, punishment, or neutral effect). Changes in behavior produced by stimuli that either signify events to come or indicate probable response consequences also have been shown to rely heavily on cognitive representations of contingencies. People are not much affected by paired stimulation unless they recognize that the events are correlated (Dawson & Furedy, 1976; Grings, 1973). Looking at behavioural change theories and models, we see that learning theories/behavior analytic theories of change, social learning/social cognitive theory, the theory of reasoned action, the theory of planned behavior and the transtheoretical/stages of change model all attempt to explain behavior change in human beings.

The discussion thus far has examined the role of cognition in the acquisition and regulation of behavior. Motivation, which is primarily concerned with activation and persistence of behavior, is also partly rooted in cognitive activities. The capacity to represent future consequences in thought provides one cognitively based source of motivation. A second cognitively based source of motivation operates through the intervening influences of goal setting and self-evaluative reactions (Bandura, 1976b, 1977). Self-motivation involves standards against which to evaluate performance. By making self-rewarding reactions conditional on attaining a certain level of behavior, individuals create self-inducements to persist in their efforts until their performances match self-prescribed standards. James Prochaska and Carlo DiClemente (2003) developed a model that borrowed from many counseling theories in an attempt to understand behavior. I will now take a look at their model for insights into the process of behavior change.

A model for change: Empirical basis for education in correctional systems

James O. Prochaska and Carlo C. DiClemente (2003) developed the Transtheoretical Model (TTM) by studying daily human experiences and integrating existing psychotherapy models. It was named transtheoretical because it combines change variables from across many existing counseling theories. In the early stages of their research, Prochaska and his colleagues found that behavior changes are more complicated than those described by many theories. TTM has shown through 20 years of research that behavior change is a process, not an event. People progress through five distinct stages: pre-contemplation (not intending to take action within the next 6 months), contemplation (intending to take action within the next 6 months), preparation (intending to take action within the next 30 days), action (made overt changes less than six months ago), and maintenance (made overt changes more than six months ago).

The Model also identifies ten major processes of change: consciousness-raising, social liberation, dramatic relief, environment reevaluation, self-reevaluation, self liberation, counter-conditioning, stimulus control, contingency management, and helping relationships. The model also specifies the relationship between the change processes and stages. In the early stages, people apply experiential processes that are cognitive, affective, and evaluative to progress through the stages. In later stages, people rely more on the behavioral processes of conditioning, contingency management, environmental controls, and support for progressing toward termination (Prochaska, Redding, & Evers, 1996).
The key to fostering successful change is to understand what stage a person is in and then decide what strategies (processes) she or he should use to move forward. The aim of this part of the review is to establish a model for learning within the context of a prison which can be used to bring about behavior change. As such, the researcher reviewed the research conducted by many authors in this field and what their findings imply for behavior change in a prison context. The model of Prochaska and DiClemente is one that seemed suited to this process.

In using the transtheoretical model to work with offenders, the implication for the processes of change dimension would involve understanding 'how' individuals change their behavior. This will include the cognitive, affective, evaluative and behavioral strategies that an individual may use to modify the problem behavior. The stages of change (SOC) make up another dimension of the transtheoretical model: the temporal dimension, identifying the 'when' part of the change equation. Individuals are thought to progress through the SOC at different rates, and whereas the time to progress through the stages is variable, the 'set of tasks' which have to be accomplished at each stage of change are less variable. It would be important for me to bear in mind when using this model that self-efficacy and decisional balance are two components of this transtheoretical model; the application of curriculum activities would have to balance the relationship between the processes and stages of change to have a beneficial outcome for offenders.

I see prison education as part of a broader approach to rehabilitation, and as such, a curriculum designed to address basic learning needs must consider the full range of needs of the prisoner. Prison education does not take place in isolation, and its purpose cannot be understood in isolation from these wider issues. The breadth of the education curriculum is important, and the wider benefits of learning should not be sacrificed due to an overemphasis on employability skills. It is for this reason that the theories, teaching strategies and models have been reviewed and included, so that many techniques and strategies could be considered for the development and application of the curriculum modules.

Based on the literature reviewed and all that has been discussed, the research questions could possibly be:

What teaching strategies can be developed to address the learning needs of young adults, 18-35 years, in prison? (Knowles 1970)

What learning activities can be used in the teaching strategies to address the learning needs of these young adults in prison? (Revans 1982)

How can these young adults in prison be motivated to participate in, and sustain, learning activities that will develop self-efficacy and achieve behavior change? (Keller 1984, Fishbein & Ajzen 1975, Bandura 1986)

How can a curriculum be developed to meet the learning needs of young adults in prison that would have teaching strategies that would sustain learning activities, develop self-efficacy and promote behavior change? (Gagne 1965; Bruner 1959; Prochaska & DiClemente 2003)

How can the prison system support the implementation of a curriculum designed to meet the learning needs of young adults in prison in order for them to achieve behavior change?
http://www.ukessays.com/essays/education/reducing-crime-rates-raising-the-education-of-prisoners-education-essay.php
CC-MAIN-2014-15
en
refinedweb
Tim Peters wrote:
>> Unfortunately, only so in Python 2.5. If the code is also
>> meant to be used for earlier versions, it won't work there
>> at all.
>
> Does that matter?

I believe it does: the ctypes maintainer wants to keep the code identical across releases (AFAICT).

> has Py_ssize_t all over the place, and that's unique to 2.5 (so far)
> too, right?

Not necessarily. For example, Modules/_elementtree.c has

/* compatibility macros */
#if (PY_VERSION_HEX < 0x02050000)
typedef int Py_ssize_t;
#define lenfunc inquiry
#endif

I believe ctypes "wants" to take that approach also.

Regards,
Martin
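To illustrate the approach described above: with those compatibility macros in place, the same C source builds on both sides of the 2.5 boundary. A sketch (the function here is made up for illustration, not taken from the message):

/* On Python >= 2.5 this has the real Py_ssize_t-returning lenfunc
   signature; on earlier versions the typedef and #define above turn
   it into the int-returning "inquiry" signature, so the one source
   compiles unchanged against both. */
static Py_ssize_t
myobj_length(PyObject *self)
{
    return 0;  /* illustrative: report an empty object */
}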
https://mail.python.org/pipermail/python-dev/2006-March/062418.html
CC-MAIN-2014-15
en
refinedweb
Hi experts,

I've got a problem with ArcGIS for Sharepoint and the Add-WMS-Plugin from the SDK. The plugin is working and the WMS layer I want to add is displayed over my base map. Fine. But the layer is not displayed (or not added) in the "Map-Contents" section, and so it's hard to work with this layer. I have no clue where the problem is, but I think it must be in WMSCommand.xaml.cs. I've made some little changes to this file, because my WMS server does not match the name schema from the example (not the same ending of the link), and my code now looks like this:

The WMS link I want to add looks like this:

using System;
using System.Collections.Generic;
using System.Windows;
using System.Windows.Controls;
using ESRI.ArcGIS.Client.Extensibility;
using ESRI.ArcGIS.Client.Toolkit.DataSources;

namespace WMSCommand
{
    public partial class WMSDialog : UserControl
    {
        public WMSDialog()
        {
            InitializeComponent();
        }

        private WmsLayer wmsLayer;

        private void Add_Click(object sender, RoutedEventArgs e)
        {
            wmsLayer = new WmsLayer();
            wmsLayer.Url = ((ComboBoxItem)this.WMSList.SelectedItem).Tag.ToString();

            // Copy the ProxyHandler.ashx file to the following location on the SharePoint Server:
            // <Program Files>\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS
            // Supply the Url to the proxy. Substitute the domain of your SharePoint server.
            wmsLayer.ProxyUrl = "";

            // In order to get the names of the layers, it needs to call getcapabilities
            wmsLayer.SkipGetCapabilities = false;
            wmsLayer.Version = "1.1.1";
            wmsLayer.Initialized += WMSLayer_Initialized;

            string layerName = "WMS Layer";
            if (wmsLayer.Url.ToLower().EndsWith(""))
            {
                string[] splits = wmsLayer.Url.Split('/');
                layerName = splits[splits.Length - 1];
            }
            wmsLayer.SetValue(MapApplication.LayerNameProperty, layerName);
            MapApplication.Current.Map.Layers.Add(wmsLayer);
        }

        /// <summary>
        /// In the initialized event, the getcapabilities request returns the names of the layers
        /// </summary>
        private void WMSLayer_Initialized(object sender, EventArgs args)
        {
            List<string> layerNames = new List<string>();
            foreach (WmsLayer.LayerInfo layerInfo in wmsLayer.LayerList)
            {
                if (layerInfo.Name != null)
                {
                    layerNames.Add(layerInfo.Name);
                }
            }
            wmsLayer.Layers = layerNames.ToArray();
        }
    }
}

I have tried to delete the if-statement and just split any URL I add, but this does not solve the problem.

Help would be really appreciated! Thanks in advance!

EDIT: I just noticed that the layer name is displayed in the following dialog and when I want to remove the layer. But just not in Map-Contents on the right side of the map.
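For illustration of the name derivation in Add_Click above: with a hypothetical URL (the actual one is not shown in the post), the split keeps the last path segment as the layer name:

// Hypothetical URL, for illustration only.
string url = "http://server.example.com/services/MyMap/WMSServer";
string[] splits = url.Split('/');
string layerName = splits[splits.Length - 1]; // "WMSServer"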
http://forums.arcgis.com/threads/51518-WMS-not-in-Map-Contents?p=175897
CC-MAIN-2014-15
en
refinedweb
Introducing XML Serialization

Serialization is the process of converting an object into a form that can be readily transported. For example, you can serialize an object and transport it over the Internet using HTTP between a client and a server. On the other end, deserialization reconstructs the object from the stream.

XML serialization serializes only the public fields and property values of an object into an XML stream. XML serialization does not include type information. For example, if you have a Book object that exists in the Library namespace, there is no guarantee that it will be deserialized into an object of the same type.

The central class in XML serialization is the XmlSerializer class, and its most important methods are Serialize and Deserialize. The XML stream generated by the XmlSerializer is compliant with the World Wide Web Consortium (www.w3.org) XML Schema definition language (XSD) 1.0 recommendation. Furthermore, the data types generated are compliant with the document titled "XML Schema Part 2: Datatypes."

The XML Schema Definition tool can be used to generate the classes based on an existing XML Schema. If you have an XML Schema, you can run the XML Schema Definition tool to produce a set of classes that are strongly typed to the schema and annotated with attributes. When an instance of such a class is serialized, the generated XML adheres to the XML Schema. Provided with such a class, you can program against an easily manipulated object model while being assured that the generated XML conforms to the XML schema. This is an alternative to using other classes in the .NET Framework, such as the XmlReader and XmlWriter classes, to parse and write an XML stream. For more information, see XML Documents and Data. These classes allow you to parse any XML stream. In contrast, use the XmlSerializer when the XML stream is expected to conform to a known XML Schema.

Attributes control the XML stream generated by the XmlSerializer class, allowing you to set the XML namespace, element name, attribute name, and so on, of the XML stream. For more information about these attributes and how they control XML serialization, see Controlling XML Serialization Using Attributes. For a table of those attributes that are used to control the generated XML, see Attributes That Control XML Serialization.

The XmlSerializer class can further serialize an object and generate an encoded SOAP XML stream. The generated XML adheres to section 5 of the World Wide Web Consortium document titled "Simple Object Access Protocol (SOAP) 1.1." For more information about this process, see How to: Serialize an Object as a SOAP-Encoded XML Stream. For a table of the attributes that control the generated XML, see Attributes That Control Encoded SOAP Serialization.

The XmlSerializer class generates the SOAP messages created by, and passed to, XML Web services. To control the SOAP messages, you can apply attributes to the classes, return values, parameters, and fields found in an XML Web service file (.asmx). You can use both the attributes listed in "Attributes That Control XML Serialization" and "Attributes That Control Encoded SOAP Serialization" because an XML Web service can use either the literal or encoded SOAP style. For more information about using attributes to control the XML generated by an XML Web service, see XML Serialization with XML Web Services. For more information about SOAP and XML Web services, see Customizing SOAP Message Formatting.
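Before the details below, it may help to see the basic round trip sketched in code; the Book type, its field and the file name here are illustrative only, not part of this documentation:

using System;
using System.IO;
using System.Xml.Serialization;

public class Book
{
    public string Title;
}

public class Example
{
    public static void Main()
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Book));
        Book book = new Book();
        book.Title = "Sample";
        // Serialize the object to an XML file.
        using (StreamWriter writer = new StreamWriter("book.xml"))
        {
            serializer.Serialize(writer, book);
        }
        // Deserialize the object back from the XML file.
        using (StreamReader reader = new StreamReader("book.xml"))
        {
            Book copy = (Book)serializer.Deserialize(reader);
            Console.WriteLine(copy.Title);
        }
    }
}

The same XmlSerializer instance handles both directions: Serialize writes the XML stream and Deserialize reconstructs the object from it.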
Security Considerations for XmlSerializer Applications

When creating an application that uses the XmlSerializer, you should be aware of the following items and their implications:

The XmlSerializer creates C# (.cs) files and compiles them into .dll files in the directory named by the TEMP environment variable; serialization occurs with those DLLs. The code and the DLLs are vulnerable to a malicious process at the time of creation and compilation. When using a computer running Microsoft Windows NT 4.0 or later, it might be possible for two or more users to share the temp directory. Sharing a temp directory is dangerous if the two accounts have different security privileges, and the higher-privilege account runs an application using the XmlSerializer. In this case, one user can breach the computer's security by replacing either the .cs or .dll file that is compiled. To eliminate this concern, always be sure that each account on the computer has its own profile.

The XmlSerializer serializes data and runs any code using any type given to it. There are two ways in which a malicious object presents a threat. It could run malicious code, or it could inject malicious code into the C# file created by the XmlSerializer. In the first case, if a malicious object tries to run a destructive procedure, code access security helps prevent any damage from being done. In the second case, there is a theoretical possibility that a malicious object may somehow inject code into the C# file created by the XmlSerializer. Although this issue has been examined thoroughly, and such an attack is considered unlikely, you should take the precaution of never serializing data with an unknown and untrusted type.

Serialized sensitive data might be vulnerable. After the XmlSerializer has serialized data, it can be stored as an XML file or other data store. If your data store is available to other processes, or is visible on an intranet or the Internet, the data can be stolen and used maliciously. For example, if you create an application that serializes orders that include credit card numbers, the data is highly sensitive. To help prevent this, always protect the store for your data and take steps to keep it private.

Serialization of a Simple Class

The following code example shows a simple class with a public field.

public class OrderForm
{
    public DateTime OrderDate;
}

When an instance of this class is serialized, it might resemble the following.

<OrderForm>
    <OrderDate>12/12/01</OrderDate>
</OrderForm>

For more examples of serialization, see Examples of XML Serialization.

Items That Can Be Serialized

The following items can be serialized using the XmlSerializer class:

Public read/write properties and fields of public classes

Classes that implement ICollection or IEnumerable

XmlElement objects

XmlNode objects

DataSet objects

For more information about serializing or deserializing objects, see How to: Serialize an Object and How to: Deserialize an Object.

Advantages of Using XML Serialization

The XmlSerializer class gives you complete and flexible control when you serialize an object as XML. If you are creating an XML Web service, you can apply attributes that control serialization to classes and members to ensure that the XML output conforms to a specific schema. For example, XmlSerializer enables you to do the following (a short sketch of these controls appears after the list):

Specify whether a field or property should be encoded as an attribute or an element.

Specify an XML namespace to use.

Specify the name of an element or attribute if a field or property name is inappropriate.
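As a sketch of those attribute-based controls (the Book class and all names here are illustrative, not from this documentation):

using System.Xml.Serialization;

[XmlRoot(Namespace = "http://www.example.com/books")]
public class Book
{
    // Encoded as an XML attribute rather than an element.
    [XmlAttribute("isbn")]
    public string Isbn;

    // The element name overrides a field name that would be
    // inappropriate in the XML stream.
    [XmlElement("BookTitle")]
    public string Ttl;
}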
Another advantage of XML serialization is that you have no constraints on the applications you develop, as long as the XML stream that is generated conforms to a given schema. Imagine a schema that is used to describe books. It features a title, author, publisher, and ISBN number element. You can develop an application that processes the XML data in any way you want, for example, as a book order, or as an inventory of books. In either case, the only requirement is that the XML stream conforms to the specified XML Schema definition language (XSD) schema.

XML Serialization Considerations

The following should be considered when using the XmlSerializer class:

The XmlSerializer gives special treatment to classes that implement IEnumerable or ICollection, as follows. A class that implements IEnumerable must implement a public Add method that takes a single parameter. The Add method's parameter must be consistent (polymorphic) with the type returned from the IEnumerator.Current property returned from the GetEnumerator method. A class that implements ICollection in addition to IEnumerable (such as CollectionBase) must have a public Item indexed property (an indexer in C#) that takes an integer, and it must have a public Count property of type integer. The parameter passed to the Add method must be the same type as that returned from the Item property, or one of that type's bases. For classes implementing ICollection, values to be serialized are retrieved from the indexed Item property rather than by calling GetEnumerator. Also, public fields and properties are not serialized, with the exception of public fields that return another collection class (one that implements ICollection). For an example, see Examples of XML Serialization.

XSD Data Type Mapping

The World Wide Web Consortium (www.w3.org) document titled "XML Schema Part 2: Datatypes" specifies the simple data types that are allowed in an XML Schema definition language (XSD) schema. For many of these (for example, int and decimal), there is a corresponding data type in the .NET Framework. However, some XML data types do not have a corresponding data type in the .NET Framework (for example, the NMTOKEN data type). In such cases, if you use the XML Schema Definition tool (Xsd.exe) to generate classes from a schema, an appropriate attribute is applied to a member of type string, and its DataType property is set to the XML data type name. For example, if a schema contains an element named "MyToken" with the XML data type NMTOKEN, the generated class might contain a member as shown in the following example.

[XmlElement(DataType = "NMTOKEN")]
public string MyToken;

Similarly, if you are creating a class that must conform to a specific XML Schema (XSD), you should apply the appropriate attribute and set its DataType property to the desired XML data type name. For a complete list of type mappings, see the DataType property of the attribute classes in the System.Xml.Serialization namespace (for example, XmlElementAttribute and XmlAttributeAttribute).
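Tying back to the collection rules under "XML Serialization Considerations" above, a class satisfying them might be sketched as follows (Book and BookCollection are illustrative names):

using System.Collections;

public class BookCollection : CollectionBase
{
    // Public Add method whose single parameter matches the type
    // returned from the Item property, as the rules require.
    public void Add(Book book)
    {
        List.Add(book);
    }

    // Public Item indexed property taking an integer; CollectionBase
    // already supplies the required public Count property.
    public Book this[int index]
    {
        get { return (Book)List[index]; }
        set { List[index] = value; }
    }
}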
http://msdn.microsoft.com/en-us/library/182eeyhh(VS.85).aspx
CC-MAIN-2014-15
en
refinedweb
FAQ How do I write to the console from a plug-in?

Many of the people asking this question are confused by the fact that two Eclipse instances are in use when you are developing plug-ins. One is the development platform you are using as your IDE, and the other is the target platform (also known as the runtime workbench), consisting of the plug-ins in the development workbench you are testing against. When a plug-in in the target platform writes a message to System.out or System.err, the message appears in the Console view of the development platform. This view emulates the Java console that appears when Eclipse runs under Windows with java.exe. You should be writing to the console only in this manner when in debug mode (see FAQ_How_do_I_use_the_platform_debug_tracing_facility?).

In some situations, however, a plug-in in the development platform has a legitimate reason to write to the development platform Console view. Some tools originally designed for the command line, such as Ant and CVS, traditionally use console output as a way of communicating results to the tool user. When these tools are ported for use with an IDE, this console output is typically replaced with richer forms of feedback, such as views, markers, and decorations. However, users accustomed to the old command-line output may still want to see this raw output as an alternative to other visual forms of feedback. Tools in this category can use the Console view to write this output.

Prior to Eclipse 3.0, each plug-in that wanted console-like output created its own Console view. Eclipse 3.0 provides a single generic Console view that all plug-ins can write to. The view can host several console documents at once and allows the user to switch between different console pages. Each page in the console is represented by an org.eclipse.ui.console.IConsole object. To write to the console, you need to create your own IConsole instance and connect it to the Console view. To do this, you have to add a new dependency to org.eclipse.ui.console in the plugin.xml of your plugin. For a console containing a simple text document, you can instantiate a MessageConsole instance. Here is a method that locates a console with a given name and creates a new one if it cannot be found:

   private MessageConsole findConsole(String name) {
      ConsolePlugin plugin = ConsolePlugin.getDefault();
      IConsoleManager conMan = plugin.getConsoleManager();
      IConsole[] existing = conMan.getConsoles();
      for (int i = 0; i < existing.length; i++)
         if (name.equals(existing[i].getName()))
            return (MessageConsole) existing[i];
      // no console found, so create a new one
      MessageConsole myConsole = new MessageConsole(name, null);
      conMan.addConsoles(new IConsole[]{myConsole});
      return myConsole;
   }

Once a console is created, you can write to it either by directly modifying its IDocument or by opening an output stream on the console. This snippet opens a stream and writes some text to a console:

   MessageConsole myConsole = findConsole(CONSOLE_NAME);
   MessageConsoleStream out = myConsole.newMessageStream();
   out.println("Hello from Generic console sample action");

Creating a console and writing to it do not create or reveal the Console view. If you want to make sure that the Console view is visible, you need to reveal it using the usual workbench API. Even once the Console view is revealed, keep in mind that it may contain several pages, each representing a different IConsole provided by a plug-in.
Additional API lets you ask the Console view to display your console. This snippet reveals the Console view and asks it to display a particular console instance:

   IConsole myConsole = ...;  // your console instance
   IWorkbenchPage page = ...; // obtain the active page
   String id = IConsoleConstants.ID_CONSOLE_VIEW;
   IConsoleView view = (IConsoleView) page.showView(id);
   view.display(myConsole);

See Also: FAQ How do I use the platform debug tracing facility?
http://wiki.eclipse.org/index.php?title=FAQ_How_do_I_write_to_the_console_from_a_plug-in%3F&diff=prev&oldid=237919
CC-MAIN-2014-15
en
refinedweb
"We... So what was the problem that required such an unusual solution? An engineer explained, that they delivered version 1.0 of their API with a method void foo(String arg) (see Listing 1): package api; public class API { public static void foo(String arg) { System.out.println("Now in : void API.foo(String)"); System.out.println("arg = " + arg); } } Some time later, they delivered version 2.0 of the API where they accidently changed the signature of foo to void foo(String arg) (see Listing 2): package api; public class API { public static Object foo(String arg) { System.out.println("Now in : Object API.foo(String)"); System.out.println("arg = " + arg); return null; } } Unfortunately they didn't realize this just until a client complained that one of their applications didn't worked anymore because a third party library which they where using (and of which they had no source code!) was compiled against version 1.0 of the API. This was similar to the test program shown in Listing 3: import api.API; public class Test { public static String callApiMethod() { System.out.println("Calling: void API.foo(String)"); API.foo("hello"); System.out.println("Called : void API.foo(String)"); return "OK"; } public static void main(String[] args) { System.out.println(callApiMethod()); } } If compiled and run against the old API the Test class will run as follows: > javac -cp apiOld Test.java > java -cp apiOld Test Calling: void API.foo(String) Now in : void API.foo(String) arg = hello Called : void API.foo(String) OK However, if compiled against the old API shown in Listing 1 and run against the new API from Listing 2, it will produce a NoSuchMethodError: > javac -cp apiOld Test.java > java -cp apiNew Test Calling: void API.foo(String) Exception in thread "main" java.lang.NoSuchMethodError: api.API.foo(Ljava/lang/String;)V at Test.callApiMethod(Test.java:11) at Test.main(Test.java:17) Unfortunately, at this point it was already impossible to revert the change in foo's signature, because there already existed a considerable number of new client libraries which were compiled against version 2.0 and depended on the new signature. Our engineer now asked to "hack" the Java VM such that calls to the old version of foo get redirected to the new one if version 2.0 of the API is used. Hacking the VM for such a purpose is of course out of question. But they asked so nicely and I had already heard of bytecode instrumentation and rewriting for so many times in the past without ever having the time to try it out that I finally decided to help them out with the hack they requested. There were two possible solutions I could think of: statically edit the offending class files and rewrite the calls to the old API with calls to the new one (remember that the client had no sources for the library which caused the problems). This solution had two drawbacks: first, it would result in two different libraries (one compatible with the old and one compatible with the new API) and second, it had to be manually repeated for each such library and it was unknown, what other libraries could cause this problem. A better solution would be to dynamically rewrite the calls at runtime (i.e. at load time, to be more exact) only if needed (i.e. if a library which was compiled against the old API is running with the new one). This solution is more general, but it has the drawback of introducing a small performance penalty because all classes have to be scanned for calls to the old API method at load time. 
I decided to use dynamic instrumentation, but then again there were (at least) two possibilities for how this could be implemented. First, Java 5 introduced a new Instrumentation API which serves exactly our purpose, namely "..to instrument programs running on the JVM. The mechanism for instrumentation is modification of the byte-codes of methods". Second, there has always been the possibility to use a custom class loader which alters the bytecodes of classes while they are loaded. I'll detail both approaches here:

The Java Instrumentation API is located in the java.lang.instrument package. In order to use it, we have to define a Java programming language agent which registers itself with the VM. During this registration, it receives an Instrumentation object as argument which, among other things, can be used to register class transformers (i.e. classes which implement the ClassFileTransformer interface) with the VM. A Java agent can be loaded at VM startup with the special command line option -javaagent:jarpath[=options], where jarpath denotes the jar-file which contains the agent. The jar-file must contain a special attribute called Premain-Class in its manifest which specifies the agent class within the jar-file. Similar to the main method in a simple Java program, an agent class has to define a so-called premain method with the following signature: public static void premain(String agentArgs, Instrumentation inst). This method will be called when the agent is registered at startup (before the main method) and gives the agent a chance to register class transformers with the instrumentation API. The following listing shows the Premain-Class of our instrumentation agent:

package instrumentationAgent;

import java.lang.instrument.Instrumentation;

public class ChangeMethodCallAgent {
  public static void premain(String args, Instrumentation inst) {
    inst.addTransformer(new ChangeMethodCallTransformer());
  }
}

A class file transformer has to implement the ClassFileTransformer interface, which defines a single transform method. The transform method takes quite a few arguments, of which we only need the classfileBuffer that contains the class file as a byte buffer. The class transformer is then free to change the class definition, as long as the returned byte buffer contains another valid class definition. Listing 5 shows our minimal ChangeMethodCallTransformer. It calls the real transformation method Transformer.transform, which operates on the bytecodes and replaces calls to the old API method with calls to the new version of the method. The Transformer class will be described in a later section of this article (see Listing 8).

package instrumentationAgent;

import bytecodeTransformer.Transformer;
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.IllegalClassFormatException;
import java.security.ProtectionDomain;

public class ChangeMethodCallTransformer implements ClassFileTransformer {
  public byte[] transform(ClassLoader loader, String className,
                          Class<?> classBeingRedefined,
                          ProtectionDomain protectionDomain,
                          byte[] classfileBuffer) throws IllegalClassFormatException {
    return Transformer.transform(classfileBuffer);
  }
}

For the sake of completeness, Listing 6 shows the manifest file which is used to create the instrumentation agent jar-file. ChangeMethodCallAgent is defined to be the premain class of the agent. Notice that we have to put asm-3.1.jar in the boot class path of the agent jar-file, because it is needed by our actual transform method.
Manifest-Version: 1.0
Premain-Class: instrumentationAgent.ChangeMethodCallAgent
Boot-Class-Path: asm-3.1.jar

If we run our test application with the new instrumentation agent, we will not get an error anymore. You can see the output of this invocation in the following listing:

> java -cp apiNew:asm-3.1.jar:bytecodeTransformer.jar:. -javaagent:instrumentationAgent.jar Test
Calling: void API.foo(String)
Now in : Object API.foo(String)
arg = hello
Called : void API.foo(String)
OK

Another possibility to take control over and alter the bytecodes of a class is to use a custom class loader. Dealing with class loaders is quite tricky, and there are numerous publications which deal with this topic (e.g. References [2], [3], [4]). One important point is to find the right class loader in the hierarchy of class loaders which is responsible for loading the classes we want to transform. Especially in Java EE scenarios, which can have a lot of chained class loaders, this may be no easy task. But once this class loader is identified, the changes which have to be applied in order to make the necessary bytecode transformations are trivial.

For this example I will write a new system class loader. The system class loader is responsible for loading the application, and it is the default delegation parent for new class loaders. If the system property java.system.class.loader is defined at VM startup, then the value of that property is taken to be the name of the system class loader. It will be created with the default system class loader (which is an implementation-dependent instance of ClassLoader) as the delegation parent. The following listing shows our simple system class loader:

package systemClassLoader;

import bytecodeTransformer.Transformer;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class SystemClassLoader extends ClassLoader {

  public SystemClassLoader(ClassLoader parent) {
    super(parent);
  }

  @Override
  public Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    if (name.startsWith("java.")) {
      // Only the bootstrap class loader can define classes in java.*
      return super.loadClass(name, resolve);
    }
    try {
      // read the raw class file bytes via the parent's resource lookup
      ByteArrayOutputStream bs = new ByteArrayOutputStream();
      InputStream is = getResourceAsStream(name.replace('.', '/') + ".class");
      byte[] buf = new byte[512];
      int len;
      while ((len = is.read(buf)) > 0) {
        bs.write(buf, 0, len);
      }
      // rewrite the bytecodes, then define the class from the transformed bytes
      byte[] bytes = Transformer.transform(bs.toByteArray());
      return defineClass(name, bytes, 0, bytes.length);
    } catch (Exception e) {
      return super.loadClass(name, resolve);
    }
  }
}

In fact, we only have to extend the abstract class java.lang.ClassLoader and override the loadClass method. Inside loadClass, we immediately bail out and return the output of the superclass version of loadClass if the class name is in the java package, because only the bootstrap class loader is allowed to define such classes. Otherwise we read the bytecodes of the requested class (again by using the superclass methods), transform them with our Transformer class (see Listing 8) and finally call defineClass with the transformed bytecodes to generate the class. The transformer, which will be presented in the next section, takes care of intercepting all calls to the old API method and replaces them with calls to the method in the new API. If we run our test application with the new system class loader, we will succeed again without any error.
You can see the output of this invocation in the following listing:

> java -cp apiNew:asm-3.1.jar:bytecodeTransformer.jar:systemClassLoader.jar:. \
       -Djava.system.class.loader=systemClassLoader.SystemClassLoader Test
Calling: void API.foo(String)
Now in : Object API.foo(String)
arg = hello
Called : void API.foo(String)
OK

After having demonstrated two possibilities how bytecode instrumentation can be applied to a Java application, it is finally time to show how the actual rewriting takes place. This is fortunately quite easy today, because with ASM, BCEL and SERP, to name just a few, there exist some quite elaborate frameworks for Java bytecode rewriting. As detailed by Jari Aarniala in his excellent paper "Instrumenting Java bytecode", ASM is the smallest and fastest of these libraries, so I decided to use it for this project. ASM's architecture is based on the visitor pattern, which makes it not only very fast but also easy to extend. Listing 8 finally shows the Transformer class which was used in the instrumentation agent (see Listing 5) and in our custom class loader (see Listing 7).

package bytecodeTransformer;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassWriter;

public class Transformer {
  public static byte[] transform(byte[] cl) {
    ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
    ChangeMethodCallClassAdapter ca = new ChangeMethodCallClassAdapter(cw);
    ClassReader cr = new ClassReader(cl);
    cr.accept(ca, 0);
    return cw.toByteArray();
  }
}

The public static transform method takes a byte array with a Java class definition as input argument. These bytecodes are fed into an ASM ClassReader object, which parses the bytecodes and allows a ClassVisitor object to visit the class. In our case, this class visitor is an object of type ChangeMethodCallClassAdapter, which is derived from ClassAdapter. ClassAdapter is a convenience class visitor which delegates all visit calls to the class visitor object which it takes as argument in its constructor. In our case, we delegate the various visit methods to a ClassWriter, with the exception of the visitMethod method (see Listing 9).

package bytecodeTransformer;

import org.objectweb.asm.ClassAdapter;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;

public class ChangeMethodCallClassAdapter extends ClassAdapter {

  public ChangeMethodCallClassAdapter(ClassVisitor cv) {
    super(cv);
  }

  @Override
  public MethodVisitor visitMethod(int access, String name, String desc,
                                   String signature, String[] exceptions) {
    MethodVisitor mv;
    mv = cv.visitMethod(access, name, desc, signature, exceptions);
    if (mv != null) {
      mv = new ChangeMethodCallAdapter(mv);
    }
    return mv;
  }
}

We are only interested in the methods of a class, because our api.API.foo method can only be called from within another method. Notice that static initializers are grouped together in the generated <clinit> method, which is visited just like any other method. The ChangeMethodCallAdapter is finally the place where the bytecode rewriting will take place. Again, ChangeMethodCallAdapter extends the generic MethodAdapter, which by default passes all bytecodes to its class writer delegate. The only exception here is visitMethodInsn, which will be called for every bytecode instruction that invokes a method.
package bytecodeTransformer;

import org.objectweb.asm.MethodAdapter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class ChangeMethodCallAdapter extends MethodAdapter {

  public ChangeMethodCallAdapter(MethodVisitor mv) {
    super(mv);
  }

  @Override
  public void visitMethodInsn(int opcode, String owner, String name, String desc) {
    if ("api/API".equals(owner) && "foo".equals(name) &&
        "(Ljava/lang/String;)V".equals(desc)) {
      // redirect the call to the new method with the changed return type ...
      mv.visitMethodInsn(opcode, owner, name, "(Ljava/lang/String;)Ljava/lang/Object;");
      // ... and discard the Object return value the old caller doesn't expect
      mv.visitInsn(Opcodes.POP);
    } else {
      mv.visitMethodInsn(opcode, owner, name, desc);
    }
  }
}

In visitMethodInsn (see Listing 10), we look for methods named foo with a receiver of type API and a signature equal to (Ljava/lang/String;)V (i.e. a String argument and a void return value). These are exactly the calls to the old version of foo which we want to patch. To finally patch such a call, we invoke our delegate with the same receiver and method name, but with the changed signature. We also have to insert a new POP bytecode after the call, because the new version of foo will return an Object which wouldn't be handled otherwise (the following code doesn't expect foo to return a value, because it was compiled against the old API (see Listing 1)). That's it - all the other calls and bytecode instructions will be copied verbatim by the class writer to the output byte array!

This article should by no means encourage you to be lazy with your API design and specification. It's always better to prevent problems as described in this article by good design and even better testing (e.g. signature tests of all publicly exposed methods). I also don't claim that the "hack" presented here is a good solution for the above problem - it was just fun to see what's possible today in Java with very little effort!

You can download the complete source code of this example together with a self-explaining Ant file from here: hack.zip

I want to thank Jari Aarniala for his very interesting, helpful and concise article "Instrumenting Java bytecode", which helped me a lot to get started with this topic!

[1] Instrumenting Java bytecode, by Jari Aarniala
[2] Internals of Java Class Loading, by Binildas Christudas
[3] Inside Class Loaders, by Andreas Schaefer
[4] Managing Component Dependencies Using ClassLoaders, by Don Schwarz
[5] ASM homepage
[6] ASM 3.0: A Java bytecode engineering library (tutorial in PDF format)
[7] BCEL: The Byte Code Engineering Library
[8] SERP: a framework for manipulating Java bytecode
[9] hack.zip - the source code from this article

by erdalkaraca - 2009-02-24 16:38
The AspectJ weaver might be of interest to those not familiar with bytecode manipulation; it provides load-time weaving (bytecode manipulation at load time)...

by pdoubleya - 2009-02-20 13:27
Very nice work, Volker. This is a really helpful intro to the various tools--kudos, you couldn't have made it any simpler without ruining the effect. Cheers! Patrick
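A note for readers using current ASM versions: the ClassAdapter and MethodAdapter convenience classes used above were removed in ASM 4; their roles were folded into ClassVisitor and MethodVisitor, which now take an API-version argument, and visitMethodInsn gained an isInterface flag in ASM 5. A sketch of the same rewriting adapter against a modern ASM (9.x) API, assuming the rest of the pipeline is unchanged:

import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class ChangeMethodCallAdapter extends MethodVisitor {

  public ChangeMethodCallAdapter(MethodVisitor mv) {
    super(Opcodes.ASM9, mv);
  }

  @Override
  public void visitMethodInsn(int opcode, String owner, String name,
                              String desc, boolean isInterface) {
    if ("api/API".equals(owner) && "foo".equals(name) &&
        "(Ljava/lang/String;)V".equals(desc)) {
      super.visitMethodInsn(opcode, owner, name,
          "(Ljava/lang/String;)Ljava/lang/Object;", isInterface);
      super.visitInsn(Opcodes.POP); // discard the unexpected return value
    } else {
      super.visitMethodInsn(opcode, owner, name, desc, isInterface);
    }
  }
}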
https://weblogs.java.net/node/241571/atom/feed
CC-MAIN-2014-15
en
refinedweb
Hi. When, in a paragraph whose alignment is set to justify, I apply a soft return (Shift+Return) to force a line break, InDesign fully justifies the line where the forced line break is inserted. The behaviour I would expect (and am looking for) is for InDesign to simply break the line at that point without further justifying, almost as if it were the end of the paragraph. (If I aligned the paragraph left instead of justified-with-last-line-aligned-left, it would be fine.) The reason I would like such behaviour is to have a new line within the same paragraph, but without the space before or space after of that paragraph. I guess I could accomplish this by creating several paragraph styles to be used in the same paragraph, as follows:

1. Paragraph style P1: a paragraph style with space before and space after (to be applied wherever there is no soft return within the paragraph)
2. Top paragraph style: P1 with the space before and zero space after
3. Middle paragraph style: P1 with zero space before and zero space after
4. Bottom paragraph style: P1 with the space after and zero space before

Obviously, this adds a lot of complexity to a single paragraph and is also difficult to keep track of in case changes are made to the paragraph later (e.g. removing the soft return would imply re-applying paragraph styles to other parts of the paragraph again). For instance, if there is only one soft return in the paragraph, I would apply #2 to the part before the line break and #4 to the part after the line break. But if there are two soft returns in the paragraph, I would apply #2 to the part before the 1st line break, #3 to the part after the 1st line break and before the 2nd line break, and #4 to the part after the 2nd line break! I have come across threads here reporting this issue but am not clear on what the best (better) workarounds are. This is for CS4. Thanks.

What you are seeing is absolutely normal; it's your workflow that's flawed. Soft returns should be avoided except in rare instances. Using them as a matter of course is a recipe for disaster. You'll need to create more than one paragraph style and apply them appropriately.

I agree completely with Bob's statement. But if you want to do what you want to do, write Shift+Tab before the Shift+Return. (I don't like the expression "soft return"; there is nothing soft about it, it is a forced line break!)

Since you're manually tweaking where specific lines break, consider applying the No Break property instead of the forced line break. You can create a character style that applies No Break to selected text, or apply No Break to a selection as an override rather than as a character style. Alternatively, you can replace one or more normal spaces with non-breaking space characters, to force two or more words to move to the next line if they don't fit on the current line. Also, have you tried using the Single-Line Composer instead of the Paragraph Composer? Search Google for "indesign single line composer" without quotes for details. HTH. Regards, Peter Gold, KnowHow ProServices

Thanks for the replies. peter at knowhowpro, there is a thread on here discussing a problem that is close but different to mine, dealing with forcing a new line within a sentence (and No Break). What you are suggesting appears to be directed towards that issue. Willi Adelberger, I tried what you suggested and it works fine (right indent tab).
It scares me a little to introduce a tab in this context, but I will look further into it to see if it breaks something else.

Maybe italics, or bold (or bold italic). A change of font might also work (change to serif or sans serif, the opposite of your body text). A screenshot of an example would be helpful in making pertinent suggestions.

A tab scares you and forced line breaks don't? IMO, I think you need to revisit your thought process here.

It is not a TAB, it is a RIGHT INDENT TAB, which will fill the rest of the line with free space up to the next character or character group, which in your case is the forced return.

A justified paragraph will always try to fill up the line and reach the right margin. But you can easily control this by inserting a flush space after the last word on the line, before the line break. The flush space will "eat up" all the extra space, and that particular line will behave as if it were left aligned and not justified. Does this solve your problem? I know exactly the situation you are talking about. I encounter it regularly in typesetting songs, where four lines of the same paragraph are set differently and the last two lines are set differently, but I don't want extra space between those paragraphs. I have often used another workaround: none of my paragraph styles have space above or below. I have a special "Space" paragraph style, which I insert wherever needed. Not a very elegant method, I agree... but it is very useful in certain odd cases.

Sorry, the first statement only fills the space, which Shift+Tab does too! But your second statement is completely wrong. A wrong workflow. Nobody should do it. When it comes to column, frame or page breaks you will find this extra paragraph as extra space. And when it comes to headlines whose keep options keep the next paragraph in the same column, frame or page, the next paragraph with text will break away. An empty paragraph should be avoided in any case.

I entirely agree with both. Shift+Tab and a flush space will do the same thing. The empty paragraph is a strict "no" in a structured workflow, but in the case that I describe it is indeed a boon. In my case these are poems, and each will start on a new page, so there is no question of free-flowing or uncontrolled breaks. It saves me from creating so many identical styles differing only in space above or below. Anyway, this is deviating from the main discussion.

Hi apuCygnet/Willi. I just compared the two solutions (flush space and right indent tab, both followed by a shift return) and they yield different results in terms of text reflow. The right indent tab matches the text reflow that a paragraph return would yield, but the flush space doesn't. I have another case where space before/space after is giving me trouble. Let's say I have two headings, h1 and h2. h1 can be followed by body text or by h2. Both h1 and h2 can also be preceded by body text. When h2 follows h1, the space after for h1 is good enough and h2 does not need any space before. However, when h2 follows body text, h2 needs space before. Instead of putting space before in h1 or h2, I can deal with it in body text and create a style for the last paragraph of body text before h1 or h2, and append space after to this body text style. However, this becomes quite complex when, in addition to body text, h1 and h2 can also be preceded by tables, figures, and other objects. I would need to create a style with space after for each of those.
What this problem seems to call for is a generic entity that is only space, and that I could use whatever the preceding or following style is (text, table, figure, heading, other object). So the "Space" paragraph style apuCygnet mentions is along the lines of this generic entity.

These methods are only workarounds. The best is to create all the paragraph styles needed. What hinders you from creating two styles for your heading?! Every workaround causes more work. Why are so many users avoiding using and creating styles? There is no reason to do so!

Interesting discussion. In terms of the OP question regarding how to highlight content within a paragraph, I guess it's a matter of style. I would use an em dash to separate related content that can stand by itself. Or, if you need more dramatic contrast, you can go with the suggestions in post 6. Regarding the paragraph style, I have used styles for poems where lines have a certain width and you cannot just apply a unique style to a whole paragraph. You just have to go line by line and separate them with a paragraph return. And to separate groups you use the space after on the last line. I'm sure there are other ways to achieve the same goal.

In a poem I use left-justify, and if the line is longer, so that a part of the line has to move to the next line, I use a forced return followed by a shift-tab. (In German, in such a case, the broken part of the line has to adjust to the right edge on the second line. If a verse in a poem always has the same font, I use the forced return (Shift+Return), one of the few cases where I apply it. If there is a change to italic, I use a different paragraph style. If there is a regularity, I use the Next Style property.) If the last line contains the author at the right edge, or a Bible verse has its reference at the end, even if it is the same font style, I use a character style written as follows: RIGHT-TAB + EM-SPACE + NAME/REFERENCE. The character style is applied automatically via nested styles (GREP is better but needs more know-how) and includes the em space and the reference or name at the end. This ensures that there is a little more space between the reference and the text than a word space would normally give, and if there is not enough room on the same line it is moved automatically to the next line. The spacing can also be changed to other fixed spaces or combinations of them. In German I use only the N-dash with spaces before and after, because German typography does not use the M-dash as often as English does. The N-dash (in German Halbgeviertstrich, 1/2-Geviert-Strich, Gedankenstrich or Spiegelstrich) is used as a bullet and as an interruption in text. The M-dash (Geviertstrich or Streckenstrich) is only used for a distance between cities (like »München—Wien«), but the N-dash can be used there instead. The N-dash in German always requires a space before and after and should not appear at the beginning of a line (except as a bullet), so I write it with a fixed space before it (I replace the first space via GREP).

As Willi said in his earlier post, it is very dangerous to let an empty paragraph flow in your text. Not good practice at all. I suggested it from my experience, but that case was entirely different, and I still believe that in my case that 'sin' is acceptable because without it I would have 18 paragraph styles instead of 6... Anyway, what you describe here about h1 and h2 is also an interesting case, and yes, I have encountered it too.
Here the problem can be simply solved by two versions of h2: one without space above (when it occurs after h1) and one with space above for all other places.

Your problem is that the "space after" of the first paragraph and the "space before" of the second paragraph add up in InDesign. In Ventura, in such cases the space applied is the larger of the two numbers. The theory is to give the minimum space which will satisfy the requirement of both styles. Suppose a chapter heading has a space after of 2 picas and a subheading has a space above of 1 pica. When body text follows the chapter heading, it starts exactly after 2 picas; but if the subheading follows it, it starts after 3 picas. We usually don't want that to happen, do we? If we followed Ventura's method, we would still get 2 picas, because that satisfies the requirement of both styles, and when the subheading appeared elsewhere, we would expect it to have 1 pica above... Any thoughts on this?

I think 18 paragraph styles are not much. It is not much work to use a paragraph style system with a tree of styles based on one another, so maintenance is minimal. I have prepared a whole tree of predefined paragraph styles which I use, and this gives me the opportunity to use paragraph styles with exactly the same names in every document, which has advantages when it comes to book files and their synchronization, and when exchanging text via copy and paste. The character styles are defined minimally; only when it comes to font styles do I define both font family and style, because the naming of the styles differs across many fonts. This gives me the opportunity to change style definitions via the Find Font dialog, which makes it really fast to adjust whole documents. Another advantage I see: I can prepare a CSS which always uses the same elements, classes and IDs (for objects). What you have with space before and after is solved similarly in FrameMaker. I am wondering why Adobe has not adopted it for InDesign yet.

I think this is a case of "six of one and half a dozen of the other". You'll need to make appropriate changes to your workflow to accommodate what you are trying to do. The software isn't designed the way you want it for a reason: it doesn't make logical sense for a large portion of the users. It would drive most people barmy if this were the "normal working condition" of InDesign. You've been given more than enough workflow suggestions to accomplish what you need to do; it's time to review your process and find out which one will work best for you. For me, the right indent tab or a flush space isn't a great idea; although it accomplishes what you need, it's far too easy to make an error this way. I think paragraph styles are the best option, and applying them appropriately throughout your document will be the most accurate and efficient way to control your work. If styles are an issue and you need tips on how to utilise styles more efficiently, I suggest you get this book.

I agree completely with what you write. When I mentioned the shift-tab before, I stated that this is a workaround which I do not recommend. It is also important to look forward, when it comes to EPUBs and electronic publishing. Spaces without content are NOT found in the resulting EPUB file, and forced returns (I don't like the term soft return, because it is anything but soft) can be stripped out by command, which makes sense in most cases. Even if you think today that you might never export to EPUB, you might change your opinion, and then you will have to change all your documents too.
Regarding sub-paragraphs within paragraphs, I must say I have not come across this in style guides. However, I often encounter books where I feel there is either overuse or underuse of paragraphs, in other words too many short paragraphs or paragraphs that are too long. In both cases, it dilutes the visual and semantic separation a paragraph provides the reader. "EPUB and forward-looking approaches" is an interesting point. I guess if an empty space entity is the chosen path, a hidden or transparent character in the space entity might be accounted for in EPUB. Thanks.

If you get that book I mentioned, it will demonstrate how to make master and child styles, so you only have to edit one master style to have the changes reflected in any child styles.

Willi Adelberger wrote: "What you have with space before and after is solved similarly in FrameMaker. I am wondering why Adobe has not adopted it for InDesign yet."

The more requests for "inter-paragraph spacing determined by the larger of space after/before," like FrameMaker and Ventura, the better the chance of achieving it. Anyone who wants the feature should consider filing a formal feature enhancement request here: Wishform. It probably should be an option, so that InDesign users who have become accustomed to the sum of space after/before can continue in their accustomed mode. As with the single-line/paragraph composer and keep-with settings, the reasons for some confusing composition behaviors won't be obvious without inspecting the paragraphs with the style designer. Perhaps it would be helpful to request a feature enhancement for indicators of line/paragraph composer, space after/before, and keep-with settings - something like the color highlights that indicate composition problems like tracking, etc. HTH. Regards, Peter Gold, KnowHow ProServices

Another FrameMaker-like feature request that would be useful is paragraph context formatting. In Structured FrameMaker, which is really *GML - documents that are controlled by a Document Type Definition (DTD), something like an HTML style sheet - it's possible to specify that paragraphs take on properties based on context. For example, in a sequence of list paragraphs, it's possible to define that the first, last, notfirst, and/or notlast paragraphs take on specified properties, like space before the first list paragraph, space after the last list paragraph, auto-numbering restarting for the first paragraph, and auto-numbering continuing for subsequent paragraphs. With this method, only one such "intelligent" context-aware paragraph style is needed for a particular list; the paragraph's position in the list controls its behavior. If a paragraph within the list is moved to or from the first or last or not-first or not-last position, it adjusts according to the defined context rules. InDesign's XML feature can work with a DTD, but in its current state of development, such automatic context-aware formatting isn't available. Enter your votes here: Wishform. HTH. Regards, Peter Gold, KnowHow ProServices

Eugene Tyson wrote: "it will demonstrate how to make Master and Child styles so you only have to edit 1 master style to have the changes reflected in any child styles." Hi Eugene. I suppose you are referring to basing a style on another (parent-child relationship).
I am familiar with that... and in the heading example I mentioned, if multiple styles were to be created for h2, all styles with changing space before/after for h2 would be based on a parent h2.

Yes, they would. But that behaviour can be released by editing the spacing in a child style, thus breaking that portion of the relationship - everything else (font, size, colour, etc.) would remain; just the link between the spacing settings would be dissociated between the styles. I'd make a folder with master styles and include all the headings in there. Then I'd make a subheading of the H2 style called "H2 Child" and break the spacing in H2 Child to be what is required. Then I'd generate subheadings of H2 Child and call them "H2 Grandchild". This way: H2 will have different spacing; H2 Child will have different spacing; H2 Grandchild will have the same spacing as H2 Child. To adjust H2 spacing you can do so independently without affecting the child styles, and adjusting the H2 Child spacing will adjust the H2 grandchildren. So you really only need to adjust one style to make an overall document change. I'd set keyboard shortcuts for the H2 styles: Ctrl+2 for H2, Ctrl+Alt+2 for H2 Child, Ctrl+Alt+Shift+2 for H2 Grandchild, then Ctrl+0 to go back to the body style. You could set it up with "Apply Next Style" in H2 to apply H2 Child, then H2 Child to apply H2 Grandchild. And, selecting the text from the H2 to the H2 Grandchild text, you can apply all the paragraph styles in one click using the Apply Next Style command.
http://forums.adobe.com/thread/1250735
CC-MAIN-2014-15
en
refinedweb
Hello,

I have a pcolormesh() plot that I would like to update when a user presses a key. Basically, there are data files from 0..N, and I want to update the plot to the next frame when the user presses the 'right' arrow. I have the keypress event working; I just don't know how to 'update' the data for the pcolormesh. I'm pretty new to matplotlib, so if anyone could help with this it would be greatly appreciated! Thanks! (Code snippet below.)

def press(event):
    if event.key == 'right':
        data2D = readDataFile()  # updates data2D
        # update pcolormesh somehow??
        fig.canvas.draw()

# X, Y, data2D are initially set here...
fig = plt.figure()
fig.canvas.mpl_connect('key_press_event', press)
ax = fig.add_subplot(111)
plt.pcolormesh(X, Y, data2D)
plt.colorbar()
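A minimal sketch of one common way to do this (not from the original thread; it assumes the QuadMesh API and reuses the post's X, Y, data2D and readDataFile): keep a reference to the QuadMesh returned by pcolormesh() and push new values into it with set_array(), which on older Matplotlib versions expects the flattened array:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
mesh = ax.pcolormesh(X, Y, data2D)        # keep the QuadMesh handle
fig.colorbar(mesh, ax=ax)

def press(event):
    if event.key == 'right':
        new_data = readDataFile()         # loader from the original post
        mesh.set_array(new_data.ravel())  # update the mesh in place
        mesh.set_clim(new_data.min(), new_data.max())  # rescale colors
        fig.canvas.draw_idle()

fig.canvas.mpl_connect('key_press_event', press)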
http://sourceforge.net/p/matplotlib/mailman/matplotlib-users/thread/31913409.post@talk.nabble.com/
CC-MAIN-2014-15
en
refinedweb
Ifpack_CrsIlut: ILUT preconditioner of a given Epetra_RowMatrix.

#include <Ifpack_CrsIlut.h>
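The page stub gives little usage context. In Trilinos, ILUT preconditioners of this family are typically constructed through the Ifpack factory; the following is a sketch only, assuming the factory interface and standard Ifpack parameter names (the factory builds the closely related Ifpack_ILUT class rather than Ifpack_CrsIlut itself):

#include "Ifpack.h"
#include "Teuchos_ParameterList.hpp"
#include "Teuchos_RCP.hpp"

// A is an assembled Epetra_RowMatrix (e.g. an Epetra_CrsMatrix)
Ifpack factory;
Teuchos::RCP<Ifpack_Preconditioner> prec =
    Teuchos::rcp(factory.Create("ILUT", &A, /*OverlapLevel=*/0));

Teuchos::ParameterList params;
params.set("fact: ilut level-of-fill", 1.0);  // fill allowed per row
params.set("fact: drop tolerance", 1e-9);     // drop small entries
prec->SetParameters(params);
prec->Initialize();  // symbolic setup
prec->Compute();     // numeric factorization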
http://trilinos.sandia.gov/packages/docs/r11.0/packages/ifpack/doc/html/classIfpack__CrsIlut.html
CC-MAIN-2014-15
en
refinedweb
Introduction to the repaint Method in Java

The repaint method in Java is available in the java.applet.Applet class. It is a final method that is used whenever we want the update method to be called along with the paint method: the call to update clears the current window, performs the update, and afterwards calls paint.

Syntax:

package <packagename>;
// class extending Applet
public class <classname> extends Applet {
    public <returntype> <methodName>(<arguments>) {
        repaint(); // call repaint when required
    }
}

The above syntax shows how the repaint method is used in Java. The repaint method is part of the java.applet.Applet class and cannot be overridden, so it can be called directly from a class extending Applet or its subclasses.

How repaint Works in Java

The repaint method is a final method available in the Applet class, which is why it cannot be overridden. Whenever repaint is to be used, it should be called directly from subclasses of Applet. The repaint method is responsible for driving the update-to-paint cycle of the applet. Whenever we want a component to repaint itself, we call repaint. If we have made changes to the appearance of a component but no changes to its size, we can call repaint to show the component's new appearance on the graphical user interface. repaint is an asynchronous method: when it is called, it requests that the component be erased and redrawn after a small delay. Whenever repaint is invoked on a component, a request is sent to the GUI toolkit asking it to perform the repaint at a future instant in time. The whole idea behind the repaint method is to avoid calling the paint() method directly.
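In practice the pattern is: mutate the state your paint() method reads, then call repaint() and let the toolkit schedule the actual drawing. A minimal sketch (the class, field and button here are illustrative, not from the article):

import java.applet.Applet;
import java.awt.Button;
import java.awt.Graphics;

public class CounterApplet extends Applet {
    private int clicks; // state that paint() renders

    @Override
    public void init() {
        Button b = new Button("Increment");
        b.addActionListener(e -> {
            clicks++;   // 1. update the state paint() reads
            repaint();  // 2. schedule erase + redraw; never call paint() directly
        });
        add(b);
    }

    @Override
    public void paint(Graphics g) {
        super.paint(g);
        g.drawString("Clicks: " + clicks, 20, 60);
    }
}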
Examples to Implement the repaint Method in Java

Now we will see some Java examples showing the use of the repaint method.

Example #1

Here is an example showing how the repaint method is used in Java:

package com.edubca.repaintdemo;

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.util.*;
import java.applet.Applet;

// class extending the Applet component and implementing the mouse event listener
public class RepaintDemo extends Applet implements MouseListener {
    private Vector<Point> vector;

    public RepaintDemo() {
        vector = new Vector<Point>();
        setBackground(Color.red);
        addMouseListener(this);
    }

    public void paint(Graphics graphics) { // paint method implementation
        super.paint(graphics);
        graphics.setColor(Color.black);
        Enumeration<Point> enumeration = vector.elements();
        while (enumeration.hasMoreElements()) {
            Point p = enumeration.nextElement();
            graphics.drawRect(p.x - 20, p.y - 20, 40, 40);
        }
    }

    public void mousePressed(MouseEvent mouseevent) {
        vector.add(mouseevent.getPoint());
        repaint(); // call the repaint() method
    }

    public void mouseClicked(MouseEvent mouseevent) {}
    public void mouseEntered(MouseEvent mouseevent) {}
    public void mouseExited(MouseEvent mouseevent) {}
    public void mouseReleased(MouseEvent mouseevent) {}

    public static void main(String args[]) {
        JFrame frame = new JFrame();              // create a JFrame object
        frame.getContentPane().add(new RepaintDemo()); // add the applet to the window
        frame.setTitle("Repaint Method Demo");    // set the title of the window
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setLocationRelativeTo(null);
        frame.setSize(375, 250);                  // set the size of the window
        frame.setVisible(true);                   // make the window visible
    }
}

Output: After a mouse click is performed, squares with black borders become visible at the clicked locations. Note that this is done through the repaint method, which calls update and then paint, which is why we see the shapes immediately after the click event is performed.

Example #2

To provide more clarity about the use of the repaint method, here is another example:

import java.awt.*;
import java.applet.Applet;

// class extending Applet
public class RepaintDemo extends Applet {
    int test = 2;

    public void paint(Graphics graphics) {
        super.paint(graphics);
        setBackground(Color.cyan);       // set the background color of the window
        graphics.setColor(Color.black);  // set the color of the text in the window
        graphics.drawString("Value of Variable test = " + test, 80, 80);
        try {
            Thread.sleep(1000);
        } catch (InterruptedException ex) {}
        // increase the value of the variable by 1 and update it on the GUI
        test++;
        repaint();
    }
}

Output: In the above example we have an applet with a variable called test declared inside it. We continuously increment the value of test, and we want to ensure that the updated value is visible on the user interface. Therefore we make use of the repaint method, which ensures that update is called before paint. In the resulting window, the value of the test variable keeps incrementing and its updated value is visible on the GUI.

Conclusion

The above examples provide a clear understanding of the repaint method and its function. We should call repaint when we want the update-and-paint cycle of the applet to be invoked. Calling repaint promptly refreshes the look and appearance of a component.
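A caveat on Example #2: sleeping inside paint() stalls the AWT event thread, and calling repaint() from paint() creates a busy redraw loop. A gentler way to get the same periodic update (a sketch, not from the original article) is to drive the counter with javax.swing.Timer, which fires on the event thread and pairs naturally with repaint():

import java.applet.Applet;
import java.awt.Color;
import java.awt.Graphics;
import javax.swing.Timer;

public class TimerRepaintDemo extends Applet {
    private int test = 2;

    @Override
    public void init() {
        setBackground(Color.cyan);
        // tick once per second: update state, then request a redraw
        new Timer(1000, e -> {
            test++;
            repaint();
        }).start();
    }

    @Override
    public void paint(Graphics g) {
        super.paint(g);
        g.setColor(Color.black);
        g.drawString("Value of Variable test = " + test, 80, 80);
    }
}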
https://www.educba.com/repaint-in-java/?source=leftnav
CC-MAIN-2020-34
en
refinedweb
- Messaging Framework The first part of the SOAP specification is primarily concerned with defining how SOAP messages are structured and the rules processors must abide by when producing and consuming them. Let's look at a sample SOAP message, the inventory check request described in our earlier example: NOTE All the wire examples in this book have been obtained by using the tcpmon tool, which is included in the Axis distribution you can obtain with the example package from the Sams Web site. Tcpmon (short for TCP monitor) allows you to record the traffic to and from a particular TCP port, typically HTTP requests and responses. We'll go into detail about this utility in Chapter 5. POST /axis/InventoryCheck.jws HTTP/1.0 Content-Type: application/soap+xml; charset=utf-8 <?xml version="1.0" encoding="UTF-8"?> <soapenv:Envelope xmlns: <soapenv:Body> <doCheck soapenv: <arg0 xsi:947-TI</arg0> <arg1 xsi:3</arg1> </doCheck> </soapenv:Body> </soapenv:Envelope> This is clearly an XML document (Chapter 2, "XML Primer," covered XML in detail), which has been sent via an HTTP POST. We've removed a few of the nonrelevant HTTP headers from the trace, but we left the content-type header, which indicates that this POST contains a SOAP message (note that this content-type would be different for SOAP 1.1see the sidebar for details). We'll cover the HTTP-specific parts of SOAP interactions further a bit later in the chapter. The root element is soapenv:Envelope, in the namespace, which surrounds a soapenv:Body containing application-specific content that represents the central purpose of the message. In this case we're asking for an inventory check, so the central purpose is the doCheck element. The Envelope element has a few useful namespace declarations on it, for the SOAP envelope namespace and the XML Schema data and instance namespaces. SOAP 1.1 Difference: Identifying SOAP Content The SOAP 1.1 envelope namespace is, whereas for SOAP 1.2 it has changed to. This namespace is used for defining the envelope elements and for versioning, which we will explain in more detail in the "Versioning in SOAP" section. The content-type used when sending SOAP messages across HTTP connections has changed as wellit was text/xml for SOAP 1.1 but is now application/soap+xml for SOAP 1.2. This is a great improvement, since text/xml is a generic indicator for any type of XML content. The content type was so generic that machines had to use the presence of a custom HTTP header called SOAPAction: to tell that XML traffic was, in fact, SOAP (see the section on the HTTP binding for more). Now the standard MIME infrastructure handles this for us. The doCheck element represents the remote procedure call to the inventory check service. We'll talk more about using SOAP for RPCs in a while; for now, notice that the name of the method we're invoking is the name of the element directly inside the soapenv:Body, and the arguments to the method (in this case, the SKU number and the quantity desired) are encoded inside the method element as arg0 and arg1. The real names for these parameters in Java are SKU and quantity; but due to the ad-hoc way we're calling this method, the client doesn't have any way of knowing that information, so it uses the generated names arg0 and arg1. 
The response to this message, which comes back across in the HTTP response, looks like this: Content-Type: application/soap+xml; charset=utf-8 <?xml version="1.0" encoding="UTF-8"?> <soapenv:Envelope xmlns: <soapenv:Body> <doCheckResponse soapenv: <rpc:result xmlns:return</rpc:result> <return xsi:true</return> </doCheckResponse> </soapenv:Body> </soapenv:Envelope> The response is also a SOAP envelope, and it contains an encoded representation of the result of the RPC call (in this case, the Boolean value true). What good is having this envelope structure, when we could send our XML formats directly over a transport like HTTP without a wrapper? Good question; as we answer it, we'll examine some more details of the protocol. Vertical Extensibility Let's say you want your purchase order to be extensible. Perhaps you want to include security in the document someday, or you might want to enable a notarization service to associate a token with a particular purchase order, as a third-party guarantee that the PO was sent and contained particular items. How might you make that happen? You could drop extensibility elements directly into your document before sending it. If we took the purchase order from the last chapter and added a notary token, it might look something like this: <po id="43871" submitted="2004-01-05" customerId="73852"> <notary:token xmlns: XQ34Z-4G5 </notary:token> <billTo> <company>The Skateboard Warehouse</company> ... </billTo> ... </po> To do things this way, and make it easy for your partners to use, you'd need to do two things. First, your schema would have to be explicitly extensible at any point in the structure where you might want to add functionality later (this can be accomplished in a number of ways, including the xsd:any/ schema construct); otherwise, documents containing extension elements wouldn't validate. Second, you would need to agree on rules by which those extensibility elements were to be processedwhich ones are optional, which ones affect which parts of the document, and so on. Both of these requirements present challenges. Not all schemas have been designed for extensibility, and you may need to extend a document that follows a preexisting standard format that wasn't built that way. Also, processing rules might vary from document type to document type, so it would be challenging to have a uniform model with which to build a common processor. It would be nice to have a standardized framework for implementing arbitrary extensibility in a way that everyone could agree on. It turns out that the SOAP envelope, in addition to containing a body (which must always be present), may also contain an optional Header elementand the SOAP Header structure gives us just what we want in an XML extensibility system. It's a convenient and well-defined place in which to put our extensibility elements. Headers are just XML elements that live inside the soapenv:Header/soapenv:Header tags in the envelope. The soapenv:Header always appears, incidentally, before the soapenv:Body if it's present. (Note that in the SOAP 1.2 spec, the extensibility elements are known as header blocks. However, the industryand the rest of this bookcolloquially refers to them simply as headers.) Let's look at the extensibility example recast as a SOAP message with a header: <soapenv:Envelope xmlns: <soapenv:Header> <notary:token xmlns: XQ34Z-4G5 </notary:token> </soapenv:Header> <soapenv:Body> <PO> ...normal purchase order here... 
</PO> </soapenv:Body> </soapenv:Envelope> Since the SOAP envelope wraps around whatever XML content you want to send in the body (the PO, in this example), you can use the Header to insert extensions (the notary:token header) without modifying the central core of the message. This can be compared to a situation in real life where you want to send a document and some auxiliary information, but you don't want to mark up the documentso you put the document inside an envelope and then add another piece of paper or two describing your extra information. Each individual header represents one piece of extensibility information that travels with your message. A lot of other protocols have this same basic conceptwe're all familiar with the email model of headers and body. HTTP also contains headers, and both email and HTTP use the concept of extensible, user-defined headers. However, the headers in protocols like these are simple strings; since SOAP uses XML, you can encode much richer data structures for individual headers. Also, you can use XML's structure to make processing headers much more powerful and flexible than a basic string-based model. Headers can contain any sort of data imaginable, but typically they're used for two purposes: Extending the messaging infrastructureInfrastructure headers are typically processed by middleware. The application doesn't see the headers, just their effects. They could be things like security credentials, correlation IDs for reliable messaging, transaction context identifiers, routing controls, or anything else that provides services to the application. Defining orthogonal dataThe second category of headers is application defined. These contain data that is orthogonal to the body of the message but is still destined for the application on the receiving side. An example might be extra data to accompany nonextensible schemasif you wanted to add more customer data fields but couldn't change the billTo element, for instance. Using headers to add functionality to messages is known as vertical extensibility, because the headers build on top of the message. A little later we'll discuss horizontal extensibility as well. Now that you know the basics, we'll consider some of the additional framework that SOAP supplies for headers and how to use it. After that, we'll explain the SOAP processing model, which is the key to SOAP's scalability and expressive power. The mustUnderstand Flag Some extensions might use headers to carry data that's nice to know but not critical to the main purpose of the SOAP message. For instance, you might be invoking a "buy book" operation on a store's Web service. You receive a header in the response confirmation message that contains a list of other books the site thinks you might find interesting. If you know how to process that extension, then you might offer a UI to access those books. But if you don't, it doesn't matteryour original request was still processed successfully. On the other hand, suppose the request message of that same "buy book" operation contained private information (such as a credit card number). The sender might want to encrypt the XML in the SOAP body to prevent snooping. To make sure the other side knows what to do with the postencryption data inside the body, the sender inserts a header that describes how to decrypt the message. That header is important, and anyone trying to process the message without correctly processing the header and decrypting the body is going to run into trouble. 
This is why we have the mustUnderstand g attribute, which is always in the SOAP envelope namespace. Here's what our notary header would look like with that attribute: <notary:token xmlns: XQ34Z-4G5 </notary:token> By marking things mustUnderstand (when we refer to headers "marked mustUnderstand," we mean having the soapenv:mustUnderstand attribute set to true), you're saying that the receiver must agree to all the terms of your extension specification or they can't process the message. If the mustUnderstand attribute is set to false or is missing, the header is defined as optionalin this case, processors not familiar with the extension can still safely process the message and ignore the optional header. SOAP 1.1 Difference: mustUnderstand In SOAP 1.2, the mustUnderstand attribute may have the values 0/false (false) or 1/true (true). In SOAP 1.1, despite the fact that XML allows true and false for Boolean values, the only legal mustUnderstand values are 0 and 1. The mustUnderstand attribute is a key part of the SOAP processing model, since it allows you to build extensions that fundamentally change how a given message is processed in a way that is guaranteed to be interoperable. Interoperable here means that you can always know how to gracefully fail in the face of extensions that aren't understood. SOAP Modules When you implement a semantic using SOAP headers, you typically want other parties to use your extension, unless it's purely for internal use. As such, you typically write a specification that details all the constraints, rules, preconditions, and data formats of your extension. These specifications are known as SOAP modules. Modules are named with URIs so they can be referenced, versioned, and reasoned about. We'll talk more about module specifications when we get to the SOAP binding framework a bit later.
https://www.informit.com/articles/article.aspx?p=327825&seqNum=5
CC-MAIN-2020-34
en
refinedweb
Introduction to Constructor and Destructor in Java The following article Constructor and Destructor in Java provides a detailed outline for the creation of constructor and destructor in Java. Every Programming language has this concept called constructor and destructor. Java is an object-oriented programming language. If you know the object-oriented concepts then it will be beneficial to you to understand it more clearly. A constructor is something that initializes objects and destructors are to destroy that initialization. Java has automatic garbage collection which used mark and sweep’s algorithm. What is Constructor and Destructor in Java? A constructor is used to initialize a variable that means it allocates memory for the same A constructor is nothing but automatic initialization of the object. Whenever the program creates an object at that time constructor is gets called automatically. You don’t need to call this method explicitly. Destructor is used to free that memory allocated while initialization. Generally, in java, we don’t need to call the destructor explicitly. Java has a feature of automatic garbage collection. Why do we Need Constructor and Destructor in Java? Constructor and destructor mostly used to handle memory allocation and de-allocation efficiently. Constructor and destructor do a very important role in any programming language of initializing and destroying it after use to free up the memory space. How Constructor and Destructor Works in Java A constructor is just a method in java. Which has the same name as the class name. The constructor method does not have any return type to it. Look at the following example for more clarity: class Employee { Employee() { } } If you see in the above example we have not given any return type like int or void to the method which has the same name as a class name. It is mainly used to initialize the object. When we are creating an object of a class at that time constructor get invoked. It will be more clear with the following code snippet. How to create Constructors and Destructors in java? Look at the following example class Employee { Employee() { //This is constructor. It has same name as class name. System.out.println(“This is the default constructor”); } } Types of Constructor There are two types of constructors depending upon the type we can add and remove variables. - Default Constructor - Parameterized Constructor With this, we are also going to see constructor overloading. 1. Default Constructor This is the one type of constructor. By default without any parameters, this constructor takes place. This constructor does not have any parameters in it. Example: Class Abc{ Abc(){ System.out.println(“This is the example of default constructor.”); } } 2. Parameterized Constructor As the name suggest parameterized constructor has some parameters or arguments at the time of initializing the object. Example: class Square{ int width,height; Square( int a , int b){ width = a; height = b; } int area(){ return width * height; } } class Cal{ public static void main(String[] args){ { Square s1 = new Square(10,20); int area_of_sqaure = s1.area(); System.out.println("The area of square is:" + area_of_sqaure); } } } Output: java Cal The area of the square is 200 Now, it is time to talk about constructor overloading in java. This means that having multiple constructors with different parameters. So with this, each constructor can do different tasks. Sometimes as per the requirement, we need to initialize constructors in different ways. 
Example:

public class Abc {
    String name;
    int quantity;
    int price;

    Abc(String n1, int q1, int p1) {
        name = n1;
        quantity = q1;
        price = p1;
    }

    Abc(String n2, int p2) {
        name = n2;
        price = p2;
        quantity = price / 10;
    }

    void display() {
        System.out.println("Product Name " + name);
        System.out.println("Product quantity is " + quantity);
        System.out.println("Product price is: " + price);
    }

    public static void main(String[] args) {
        Abc product1 = new Abc("Dates", 500, 50);
        product1.display();
        product1 = new Abc("cashu", 800);
        product1.display();
    }
}

Output:

Product Name Dates
Product quantity is 500
Product price is: 50
Product Name cashu
Product quantity is 80
Product price is: 800

Try out the above program, and it will be clear what exactly happens with constructor overloading.

Destructor

Before we start talking about destructors, let me tell you that there is no destructor in Java; destructors belong to the C++ programming language. What Java has instead is a feature called the automatic garbage collector, which frees dynamically allocated memory when it is no longer in use. This concept is very important, and you can explore garbage collection in Java further.

- Java handles memory allocation, and its reclamation, automatically through garbage collection.
- There is no need for the explicit use of destructors as in C++.
- For allocating memory in Java we do not have a malloc function as in C programming; the new operator does the same job, allocating memory space for an object on the heap at the time of program execution.
- The end user does not need to worry about this, as memory allocation is handled by the runtime. When the program is done with an object, the memory used for that object is reclaimed and utilized for other tasks. This process of utilizing memory efficiently is the job of garbage collection in Java.

As we know there is no destructor in Java; the closest thing it has is the finalize() method. The following are some of the key points to be noted.

Finalize() Method

- The finalize method works like a destructor, the opposite of a constructor as we have seen earlier.
- Generally, the finalize method is used for clean-up work just before an object is removed.
- To use this method, we have to explicitly define it in Java.
- The finalize method is invoked after garbage collection has decided to reclaim the object.
- Even after memory space is freed by deallocating objects, there are chances that other resources, such as fonts, are still held; to release those we make use of the finalize() method. (A short sketch of this appears after the conclusion below.)

Conclusion

Constructors and destructors (garbage collection in Java) are very important things to get clear in any programming language, as this is where you can actually see how things are done in the background to manage memory space. This has been a guide to constructors and destructors in Java: why we need them and how they work, along with examples.
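As promised above, here is a minimal sketch of overriding finalize(). Note that finalize() has been deprecated since Java 9, so treat this purely as an illustration of the concept; the class and the messages are made up for the example:

public class Resource {

    @Override
    protected void finalize() throws Throwable {
        try {
            // Clean-up work, executed by the garbage collector just before
            // this object's memory is reclaimed.
            System.out.println("finalize() called, releasing resources");
        } finally {
            super.finalize();
        }
    }

    public static void main(String[] args) {
        new Resource();   // the object becomes unreachable immediately
        System.gc();      // requests (but does not force) a collection run
    }
}

There is no guarantee when, or even whether, finalize() will run, which is exactly why Java code should not rely on it for releasing critical resources.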
https://www.educba.com/constructor-and-destructor-in-java/?source=leftnav
CC-MAIN-2020-34
en
refinedweb
RL-ARM User's Guide (MDK v4)

The type FCACHE describes the FAT sector caching structure. The structure is defined in the file File_Config.h as follows:

typedef struct fcache {
    U32 sect;    /* Cached FAT sector number   */
    U8* buf;     /* FAT sector cache buffer    */
    BIT dirty;   /* FAT table content modified */
} FCACHE;

Example:

#include <file_config.h>
...
FCACHE fatCh;
https://www.keil.com/support/man/docs/rlarm/rlarm_lib_fcache.htm
CC-MAIN-2020-34
en
refinedweb
AppointmentMappingInfo Class

Contains mappings of the appointment properties to the appropriate data fields.

Namespace: DevExpress.XtraScheduler
Assembly: DevExpress.XtraScheduler.v20.1.Core.dll

Declaration (C#):

public class AppointmentMappingInfo : MappingInfoBase<Appointment>

Declaration (Visual Basic):

Public Class AppointmentMappingInfo
    Inherits MappingInfoBase(Of Appointment)

Remarks

The AppointmentMappingInfo class contains a set of properties whose names are similar to the persistent properties declared within the Appointment interface. If the appointments storage object (AppointmentStorage) is bound to a data source via its PersistentObjectStorage<T>.DataSource property, the properties of the AppointmentMappingInfo class allow the corresponding persistent properties of appointments to be bound to the appropriate fields in the data source. An object of the AppointmentMappingInfo type can be accessed via the appointments storage's AppointmentStorageBase.Mappings property.
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraScheduler.AppointmentMappingInfo
CC-MAIN-2020-34
en
refinedweb
Container resources' request, limit & usage metrics

Kubernetes (k8s) is orchestrating the world now, not just containers. The growth in adoption of Kubernetes is massive, contributions to this open-source system keep coming, and pull requests land on GitHub from all over the world. In such an automated and fast-paced world, understanding metrics and monitoring is essential to solving problems. This often comes in handy with Prometheus and the many official and unofficial exporters that pull metrics data and serve it to Prometheus.

On-demand scaling of services and cluster nodes is another massive win for k8s, eliminating the late-night work of installing and provisioning a new machine during heavy-load periods (maybe Thanksgiving and Christmas?). But these advantages provided by k8s are often not well utilized. In some scenarios the CPU and memory resources are over-utilized by some running services, making them unavailable for other services being provisioned in the cluster, which in turn triggers scaling of the cluster. This incurs cost without any actual need for it. To avoid or monitor situations like this, there are a few safeguards, such as quota restrictions, but a quota blocks the deployment of any additional services altogether once it is reached. So it would be great to have resources like CPU and memory monitored, with a proactive alert triggered when a certain threshold (say 80%) is breached.

There are various tools available for this; a famous one is cAdvisor from Google. But much of the time the detailed information it provides is not needed, while the information you actually need can be missed. The Container Resource Exporter (CRE) solves this simple problem with a simple solution in an elegant way.

CRE is specifically used to capture each container's resource request quantity, limit quantity, and current usage in real time. The request and limit are specified as pre-defined resource objects as part of the deployment YAML; the current usage is the real-time metric. CRE makes use of the Metrics API of the k8s API server and queries the current CPU and memory usage for every container. It can run in two different scopes, viz., local and cluster. The local scope is selected by setting the namespace to be watched in an environment variable via the downward API in the deployment; the latter scrapes all the containers across the cluster.

Along with the resources, each container's status (Running, Unknown, Pending, or Error) is also exported. And not just the resource statistics: the total number of pods is also scraped and exported. If running in cluster scope, the total pod count per namespace is exported. These scraped metrics are printed in the Prometheus metric text format, with labels that best describe the container, for easy integration with Prometheus Alertmanager and Grafana for visualization.
Sample exported metrics

# HELP cpu_limit CPU Limit by deployment
# TYPE cpu_limit gauge
cpu_limit{container_name="alertmanager",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9",status="Running "} 1
cpu_limit{container_name="alertmanager-configmap-reload",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9",status="Running "} 1
# HELP cpu_request Requested CPU by deployment
# TYPE cpu_request gauge
cpu_request{container_name="alertmanager",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9",status="Running "} 0.001
cpu_request{container_name="alertmanager",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-nlqqc",status="Running "} 0.001
# HELP current_cpu_usage Current CPU Usage as reported by Metrics API
# TYPE current_cpu_usage gauge
current_cpu_usage{container_name="alertmanager",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9"} 0
current_cpu_usage{container_name="alertmanager-configmap-reload",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9"} 0
# HELP current_memory_usage Current CPU Usage as reported by Metrics API
# TYPE current_memory_usage gauge
current_memory_usage{container_name="alertmanager",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9"} 1.4168064e+07
current_memory_usage{container_name="alertmanager-configmap-reload",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9"} 1.363968e+06
# HELP memory_limit Memory Limit by deployment
# TYPE memory_limit gauge
memory_limit{container_name="alertmanager",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9",status="Running "} 5.36870912e+08
memory_limit{container_name="alertmanager-configmap-reload",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9",status="Running "} 1.073741824e+09
# HELP memory_request Requested Memory by deployment
# TYPE memory_request gauge
memory_request{container_name="alertmanager",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9",status="Running "} 2.68435456e+08
memory_request{container_name="alertmanager-configmap-reload",namespace="default",pod_name="prometheus-alertmanager-74bd9d5867-gmlj9",status="Running "} 5.36870912e+08
# HELP total_pod Total pod count in given space
# TYPE total_pod counter
total_pod{namespace="default"} 1

Visualization

(Figures: the first graph visualizes the change in the number of pods over time, followed by a total CPU and memory request/limit/usage overlay at the local scope (namespace level), and a CPU & memory request/limit/usage overlay for a sample pod.)

Sample AlertManager Rule

alert: CPU Over usage alert
expr: sum(current_cpu_usage) > 5
for: 5m
labels:
  severity: critical
annotations:
  description: 'Current value: {{ $value }}'
  summary: CPU usage is high consistently. This can cause extra load on the system.

Setting up this alert rule in the Alertmanager will trigger an alert to the default receiver whenever the CPU utilization stays above 5 for 5 consecutive minutes.

Gotchas

To run CRE, it needs a service account that has at least the "list" verb on the "pods" resource, over both the core API and the Metrics API. This can be provisioned with a Role and RoleBinding for the local scope, or a ClusterRole and ClusterRoleBinding for the cluster scope. All of this is taken care of in the Helm chart, for easy installation in the desired scope. PRs welcome.
https://medium.com/@github.gkarthiks/container-resources-request-limit-usage-metrics-5ad2b5e822b5?sk=24110479a0e08a7cd99b3c18ba22a74c
CC-MAIN-2020-34
en
refinedweb
EntryComponent of Angular

Stayed Informed: What Are Components in Angular 5, 4 and 2?

Entry components are components that Angular creates dynamically, using the ComponentFactoryResolver. First, Angular creates a component factory for each of the bootstrap components with the help of the ComponentFactoryResolver. Then, at run time, it uses those factories to instantiate the components.

You specify an entry component either by bootstrapping it in the Angular module or by using it in a routing definition. All components should also be listed in the declarations array.

const routes: Routes = [
  { path: '', redirectTo: 'home', pathMatch: 'full' },
  { path: 'login', component: LoginComponent },
  { path: 'dashboard', component: DashboardComponent },
  { path: '**', redirectTo: 'home' }
];

There are two main kinds of entry components:

1. The bootstrapped root component
2. A component you specify in a route

The bootstrapped entry component: a bootstrapped component is an entry component that Angular loads into the DOM at application launch, and the other root components are loaded dynamically as entry components. Angular loads the root component dynamically because it is bootstrapped in the Angular module. In the example below, AppComponent is a root component, so Angular loads it dynamically.

Example:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { LoginComponent } from './login/login.component';

@NgModule({
  declarations: [
    AppComponent,
    LoginComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent] // bootstrapped entry component
})
export class AppModule { }

A routed entry component: all router components must be entry components, which would normally require you to register each component in two places, the router configuration and the entryComponents array. The Angular compiler, however, is smart enough to recognize router components and adds them to the entry components automatically.

The route definition refers to components by type, i.e.:

1. LoginComponent
2. DashboardComponent

There are two components here, one for login and one for the dashboard. They make it possible to navigate between the login and dashboard views once the app's authentication and authorization checks have passed.

Example:

const routes: Routes = [
  { path: '', redirectTo: 'home', pathMatch: 'full' },
  { path: 'login', component: LoginComponent },
  { path: 'dashboard', component: DashboardComponent },
  { path: '**', redirectTo: 'home' }
];

Stayed Informed: Why does Angular need entryComponents?

I hope you enjoyed this post, so please write your thoughts in the comment box below. Thank you so much for reading this post.
https://www.code-sample.com/2018/04/angular-6-5-4-entry-component.html
CC-MAIN-2020-34
en
refinedweb
import "v.io/v23/rpc" Package rpc defines interfaces for communication via remote procedure call. Tutorial: (forthcoming) There are two actors in the system, clients and servers. Clients invoke methods on Servers, using the StartCall method provided by the Client interface. Servers implement methods on named objects. The named object is found using a Dispatcher, and the method is invoked using an Invoker. Instances of the Runtime host Clients and Servers, such instances may simultaneously host both Clients and Servers. The Runtime allows multiple names to be simultaneously supported via the Dispatcher interface. The naming package provides a rendezvous mechanism for Clients and Servers. In particular, it allows Runtimes hosting Servers to share Endpoints with Clients that enables communication between them. Endpoints encode sufficient addressing information to enable communication. TODO(toddw): Rename GlobMethod to ReservedGlob. func PublisherNames(e []PublisherEntry) []string PublisherNames returns the current set of names being published by the publisher. These names are not rooted at the mounttable. func PublisherServers(e []PublisherEntry) []string PublisherServers returns the current set of server addresses being published by the publisher. TypeCheckMethods type checks each method in obj, and returns a map from method name to the type check result. Nil errors indicate the method is invocable by the Invoker returned by ReflectInvoker(obj). Non-nil errors contain details of the type mismatch - any error with the "Aborted" id will cause a panic in a ReflectInvoker() call. This is useful for debugging why a particular method isn't available via ReflectInvoker. type AddressChooser interface { ChooseAddresses(protocol string, candidates []net.Addr) ([]net.Addr, error) } AddressChooser determines the preferred addresses to publish into the mount table when one is not otherwise specified. AddressChooserFunc is a convenience for implementations that wish to supply a function literal implementation of AddressChooser. func (f AddressChooserFunc) ChooseAddresses(protocol string, candidates []net.Addr) ([]net.Addr, error) type AllGlobber interface { // Glob__ returns a GlobReply for the objects that match the given // glob pattern in the namespace below the receiver object. All the // names returned are relative to the receiver. Glob__(ctx *context.T, call GlobServerCall, g *glob.Glob) error } AllGlobber is a powerful interface that allows the object to enumerate the entire namespace below the receiver object. Every object that implements it must be able to handle glob requests that could match any object below itself. E.g. "a/b".Glob__("*/*"), "a/b".Glob__("c/..."), etc. ArgDesc describes an argument; it is similar to signature.Arg, without the information that can be obtained via reflection. CallOpt is the interface for all Call options. type ChildrenGlobber interface { // GlobChildren__ returns a GlobChildrenReply for the receiver's // immediate children that match the glob pattern element. // It should return an error if the receiver doesn't exist. GlobChildren__(ctx *context.T, call GlobChildrenServerCall, matcher *glob.Element) error } ChildrenGlobber allows the object to enumerate the namespace immediately below the receiver object.
type Client interface { // StartCall starts an asynchronous call of the method on the server instance // identified by name, with the given input args (of any arity). The returned // Call object manages streaming args and results, and finishes the call. // // StartCall accepts at least the following options: // v.io/v23/options.ChannelTimeout, v.io/v23/options.NoRetry. StartCall(ctx *context.T, name, method string, args []interface{}, opts ...CallOpt) (ClientCall, error) // Call makes a synchronous call that will retry application level // verrors that have verror.ActionCode RetryBackoff. Call(ctx *context.T, name, method string, inArgs, outArgs []interface{}, opts ...CallOpt) error // PinConnection returns a flow.PinnedConn to the remote end if it is successful // connecting to it within the context’s timeout, or if the connection is // already in the cache. // Connection related opts passed to PinConnection are valid until the // PinnedConn.Unpin() is called. PinConnection(ctx *context.T, name string, opts ...CallOpt) (flow.PinnedConn, error) // Close discards all state associated with this Client. In-flight calls may // be terminated with an error. // TODO(mattr): This method is deprecated with the new RPC system. Close() // Closed returns a channel that will be closed after the client is shut down. Closed() <-chan struct{} } Client represents the interface for making RPC calls. There may be multiple outstanding Calls associated with a single Client, and a Client may be used by multiple goroutines concurrently. type ClientCall interface { Stream // CloseSend indicates to the server that no more items will be sent; server // Recv calls will receive io.EOF after all items are sent. Subsequent calls to // Send on the client will fail. This is an optional call - it is used by // streaming clients that need the server to receive the io.EOF terminator. CloseSend() error // Finish blocks until the server has finished the call, and fills resultptrs // with the positional output results (of any arity). Finish(resultptrs ...interface{}) error // RemoteBlesesings returns the blessings that the server provided to authenticate // with the client. // // It returns both the string blessings and a handle to the object that contains // cryptographic proof of the validity of those blessings. // // TODO(ashankar): Make this RemoteBlessingNames and remove the second result // since that is the same as ClientCall.Security().RemoteBlessings() RemoteBlessings() ([]string, security.Blessings) // Security returns the security-related state associated with the call. Security() security.Call } Call defines the interface for each in-flight call on the client. Finish must be called to finish the call; all other methods are optional. ClientOpt is the interface for all Client options. type Describer interface { // Describe the underlying object. The implementation must be idempotent // across different instances of the same underlying type; the ReflectInvoker // calls this once per type and caches the results. Describe__() []InterfaceDesc } Describer may be implemented by an underlying object served by the ReflectInvoker, in order to describe the interfaces that the object implements. This describes all data in signature.Interface that the ReflectInvoker cannot obtain through reflection; basically everything except the method names and types. Note that a single object may implement multiple interfaces; to describe such an object, simply return more than one elem in the returned list. 
type Dispatcher interface { // Lookup returns the service implementation for the object identified // by the given suffix. // // Reflection is used to match requests to the service object's method // set. As a special-case, if the object returned by Lookup implements // the Invoker interface, the Invoker is used to invoke methods // directly, without reflection. // // Returning a nil object indicates that this Dispatcher does not // support the requested suffix. // // An Authorizer is also returned to allow control over authorization // checks. Returning a nil Authorizer indicates the default // authorization checks should be used. // // Returning any non-nil error indicates the dispatch lookup has failed. // The error will be delivered back to the client. // // Lookup may be called concurrently by the underlying RPC system, and // hence must be thread-safe. Lookup(ctx *context.T, suffix string) (interface{}, security.Authorizer, error) } Dispatcher defines the interface that a server must implement to handle method invocations on named objects. EmbedDesc describes an embedded interface; it is similar to signature.Embed, without the information that can be obtained via reflection. type GlobChildrenServerCall interface { SendStream() interface { Send(reply naming.GlobChildrenReply) error } ServerCall } GlobChildrenServerCall defines the in-flight context for a GlobChildren__ call, including the method to stream the results. type GlobServerCall interface { SendStream() interface { Send(reply naming.GlobReply) error } ServerCall } GlobServerCall defines the in-flight context for a Glob__ call, including the method to stream the results. type GlobState struct { AllGlobber AllGlobber ChildrenGlobber ChildrenGlobber } GlobState indicates which Glob interface the object implements. NewGlobState returns the GlobState corresponding to obj. Returns nil if obj doesn't implement AllGlobber or ChildrenGlobber. type Globber interface { // Globber returns a GlobState with references to the interface that the // object implements. Only one implementation is needed to participate // in the namespace. Globber() *GlobState } Globber allows objects to take part in the namespace. Service objects may choose to implement either the AllGlobber interface, or the ChildrenGlobber interface. The AllGlobber interface lets the object handle complex glob requests for the entire namespace below the receiver object, i.e. "a/b".Glob__("...") must return the name of all the objects under "a/b". The ChildrenGlobber interface is simpler. Each object only has to return a list of the objects immediately below itself in the namespace graph. type Granter interface { Grant(ctx *context.T, call security.Call) (security.Blessings, error) CallOpt } Granter is a ClientCallOpt that is used to provide blessings to the server when making an RPC. It gets passed a context.T with parameters of the RPC call set on it. type InterfaceDesc struct { Name string PkgPath string Doc string Embeds []EmbedDesc Methods []MethodDesc } InterfaceDesc describes an interface; it is similar to signature.Interface, without the information that can be obtained via reflection. type Invoker interface { // Prepare is the first stage of method invocation, based on the given method // name. The given numArgs specifies the number of input arguments sent by // the client, which may be used to support method overloading or generic // processing. // // Returns argptrs which will be filled in by the caller; e.g. 
the server // framework calls Prepare, and decodes the input arguments sent by the client // into argptrs. // // If the Invoker has access to the underlying Go values, it should return // argptrs containing pointers to the Go values that will receive the // arguments. This is the typical case, e.g. the ReflectInvoker. // // If the Invoker doesn't have access to the underlying Go values, but knows // the expected types, it should return argptrs containing *vdl.Value objects // initialized to each expected type. For purely generic decoding each // *vdl.Value may be initialized to vdl.AnyType. // // The returned method tags provide additional information associated with the // method. E.g. the security system uses tags to configure AccessLists. The tags // are typically configured in the VDL specification of the method. Prepare(ctx *context.T, method string, numArgs int) (argptrs []interface{}, tags []*vdl.Value, _ error) // Invoke is the second stage of method invocation. It is passed the // in-flight context and call, the method name, and the argptrs returned by // Prepare, filled in with decoded arguments. It returns the results from the // invocation, and any errors in invoking the method. // // Note that argptrs is a slice of pointers to the argument objects; each // pointer must be dereferenced to obtain the actual argument value. Invoke(ctx *context.T, call StreamServerCall, method string, argptrs []interface{}) (results []interface{}, _ error) // Signature corresponds to the reserved __Signature method; it returns the // signatures of the interfaces the underlying object implements. Signature(ctx *context.T, call ServerCall) ([]signature.Interface, error) // MethodSignature corresponds to the reserved __MethodSignature method; it // returns the signature of the given method. MethodSignature(ctx *context.T, call ServerCall, method string) (signature.Method, error) // Globber allows objects to take part in the namespace. Globber } Invoker defines the interface used by the server for invoking methods on named objects. Typically ReflectInvoker(object) is used, which makes all exported methods on the given object invocable. Advanced users may implement this interface themselves for finer-grained control. E.g. an RPC gateway that enables bindings for other languages (like javascript) may use this interface to support serving methods without an explicit intermediate object. ChildrenGlobberInvoker returns an Invoker for an object that implements the ChildrenGlobber interface, and nothing else. ReflectInvoker returns an Invoker implementation that uses reflection to make each compatible exported method in obj available. E.g.: type impl struct{} func (impl) NonStreaming(ctx *context.T, call rpc.ServerCall, ...) (...) func (impl) Streaming(ctx *context.T, call *MyCall, ...) (...) The first in-arg must be context.T. The second in-arg must be a call; for non-streaming methods it must be rpc.ServerCall, and for streaming methods it must be a pointer to a struct that implements rpc.StreamServerCall, and also adds typesafe streaming wrappers. Here's an example that streams int32 from client to server, and string from server to client: type MyCall struct { rpc.StreamServerCall } // Init initializes MyCall via rpc.StreamServerCall. func (*MyCall) Init(rpc.StreamServerCall) {...} // RecvStream returns the receiver side of the server stream. func (*MyCall) RecvStream() interface { Advance() bool Value() int32 Err() error } {...} // SendStream returns the sender side of the server stream. 
func (*MyCall) SendStream() interface { Send(item string) error } {...} We require the streaming call arg to have this structure so that we can capture the streaming in and out arg types via reflection. We require it to be a concrete type with an Init func so that we can create new instances, also via reflection. As a temporary special-case, we also allow generic streaming methods: func (impl) Generic(ctx *context.T, call rpc.StreamServerCall, ...) (...) The problem with allowing this form is that via reflection we can no longer determine whether the server performs streaming, or what the streaming in and out types are. TODO(toddw): Remove this special-case. The ReflectInvoker silently ignores unexported methods, and exported methods whose first argument doesn't implement rpc.ServerCall. All other methods must follow the above rules; bad method types cause an error to be returned. If obj implements the Describer interface, we'll use it to describe portions of the object signature that cannot be retrieved via reflection; e.g. method tags, documentation, variable names, etc. ReflectInvokerOrDie is the same as ReflectInvoker, but panics on all errors. ListenAddrs is the set of protocol, address pairs to listen on. An anonymous struct is used to more easily initialize a ListenSpec from a different package. For TCP, the address must be in <ip>:<port> format. The <ip> may be omitted, but the <port> cannot. Use port 0 to have the system allocate one for you. type ListenSpec struct { // The addresses to listen on. Addrs ListenAddrs // The name of a proxy to be used to proxy connections to this listener. Proxy string // The address chooser to use for determining preferred publishing // addresses. AddressChooser } ListenSpec specifies the information required to create a set of listening network endpoints for a server and, optionally, the name of a proxy to use in conjunction with that listener. func (l ListenSpec) Copy() ListenSpec Copy clones a ListenSpec. The cloned spec has its own copy of the array of addresses to listen on. func (l ListenSpec) String() string type MethodDesc struct { Name string Doc string InArgs []ArgDesc // Input arguments OutArgs []ArgDesc // Output arguments InStream ArgDesc // Input stream (client to server) OutStream ArgDesc // Output stream (server to client) Tags []*vdl.Value // Method tags } MethodDesc describes an interface method; it is similar to signature.Method, without the information that can be obtained via reflection. type PublisherEntry struct { // The Name and Server 'address' of this mount table request. Name, Server string // LastMount records the time of the last attempted mount request. LastMount time.Time // LastMountErr records any error reported by the last attempted mount. LastMountErr error // TTL is the TTL supplied for the last mount request. TTL time.Duration // LastUnmount records the time of the last attempted unmount request. LastUnmount time.Time // LastUnmountErr records any error reported by the last attempted unmount. LastUnmountErr error // LastState is the last known publisher state of the entry. LastState PublisherState // DesiredState is the current desired state of the entry. // This will be either PublisherMounted or PublisherUnmounted. DesiredState PublisherState } PublisherEntry contains the status of a given mount operation. func (e PublisherEntry) String() string PublisherState indicates the state of a PublisherEntry. const ( // PublisherUnmounted indicates that the PublisherEntry is not mounted. 
PublisherUnmounted PublisherState = iota // PublisherMounting indicates that the PublisherEntry is in the process of mounting. PublisherMounting // PublisherMounted indicates that the PublisherEntry is mounted. PublisherMounted // PublisherUnmounting indicates that the PublisherEntry is in the process of unmounting. PublisherUnmounting ) func (s PublisherState) String() string String returns a string representation of the PublisherState. type Request struct { // Suffix of the name used to identify the object hosting the service. Suffix string // Method to invoke on the service. Method string // NumPosArgs is the number of positional arguments, which follow this message // (and any blessings) on the request stream. NumPosArgs uint64 // EndStreamArgs is true iff no more streaming arguments will be sent. No // more data will be sent on the request stream. // // NOTE(bprosnitz): We can support multiple stream values per request (+response) header // efficiently by adding a NumExtraStreamArgs (+NumExtraStreamResults to response) field // that is the uint64 (number of stream args to send) - 1. The request is then zero when // exactly one streaming arg is sent. Since the request and response headers are small, // this is only likely necessary for frequently streaming small values. // See implementation in CL: 3913 EndStreamArgs bool // Deadline after which the request should be cancelled. This is a hint to // the server, to avoid wasted work. Deadline vdltime.Deadline // GrantedBlessings are blessings bound to the principal running the server, // provided by the client. GrantedBlessings security.Blessings // TraceRequest maintains the vtrace context between clients and servers // and specifies additional parameters that control how tracing behaves. TraceRequest vtrace.Request // Language indicates the language of the instegator of the RPC. // By convention it should be an IETF language tag: // Language string } Request describes the request header sent by the client to the server. A non-zero request header is sent at the beginning of the RPC call, followed by the positional args. Thereafter a zero request header is sent before each streaming arg, terminated by a non-zero request header with EndStreamArgs set to true. type Response struct { // Error in processing the RPC at the server. Implies EndStreamResults. Error error // EndStreamResults is true iff no more streaming results will be sent; the // remainder of the stream consists of NumPosResults positional results. EndStreamResults bool // NumPosResults is the number of positional results, which immediately follow // on the response stream. After these results, no further data will be sent // on the response stream. NumPosResults uint64 // TraceResponse maintains the vtrace context between clients and servers. // In some cases trace data will be included in this response as well. TraceResponse vtrace.Response // AckBlessings is true if the server successfully recevied the client's // blessings and stored them in the server's blessings cache. AckBlessings bool } Response describes the response header sent by the server to the client. A zero response header is sent before each streaming arg. Thereafter a non-zero response header is sent at the end of the RPC call, right before the positional results. type Server interface { // AddName adds the specified name to the mount table for this server. // AddName may be called multiple times. AddName(name string) error // RemoveName removes the specified name from the mount table. 
RemoveName may // be called multiple times. RemoveName(name string) // Status returns the current status of the server, see ServerStatus for // details. Status() ServerStatus // Closed returns a channel that will be closed after the server is shut down. Closed() <-chan struct{} } Server defines the interface for managing a server that receives RPC calls. type ServerCall interface { // Security returns the security-related state associated with the call. Security() security.Call // Suffix returns the object name suffix for the request. Suffix() string // LocalEndpoint returns the Endpoint at the local end of // communication. LocalEndpoint() naming.Endpoint // RemoteEndpoint returns the Endpoint at the remote end of // communication. RemoteEndpoint() naming.Endpoint // RemoteAddr returns the net address of the peer. RemoteAddr() net.Addr // GrantedBlessings are blessings granted by the client to the server // (bound to the server). Typically provided by a client to delegate // to the server, allowing the server to use the client's authority to // pursue some task. // // Can be nil, indicating that the client did not delegate any // authority to the server for this request. // // This is distinct from the blessings used by the client and // server to authenticate with each other (RemoteBlessings // and LocalBlessings respectively). GrantedBlessings() security.Blessings // Server returns the Server that this context is associated with. Server() Server } ServerCall defines the in-flight context for a server method call, not including methods to stream args and results. ServerOpt is the interface for all Server options. ServerState represents the 'state' of the Server. const ( // ServerActive indicates that the server is 'active'. ServerActive ServerState = iota // ServerStopping indicates that the server has been asked to stop and is // in the process of doing so. It may take a while for the server to // complete this process since it will wait for outstanding operations // to complete gracefully. ServerStopping // ServerStopped indicates that the server has stopped. It can no longer be // used. ServerStopped ) func (i ServerState) String() string type ServerStatus struct { // The current state of the server. State ServerState // ServesMountTable is true if this server serves a mount table. ServesMountTable bool // PublisherStatus returns the status of the last mount or unmount // operation for every combination of name and server address being // published by this Server. PublisherStatus []PublisherEntry // Endpoints contains the set of endpoints currently registered with the // mount table for the names published using this server including all // proxied addresses. Endpoints []naming.Endpoint // ListenErrors contains the set of errors encountered when listening on // the network. Entries are keyed by the protocol, address specified in // the ListenSpec. ListenErrors map[struct{ Protocol, Address string }]error // ProxyErrors contains the set of errors encountered when listening on // proxies. Entries are keyed by the name of the proxy specified in the // ListenSpec. ProxyErrors map[string]error // Dirty will be closed if a status change occurs. Callers should // requery server.Status() to get the fresh server status. Dirty <-chan struct{} } type Stream interface { // Send places the item onto the output stream, blocking if there is no buffer // space available. Send(item interface{}) error // Recv fills itemptr with the next item in the input stream, blocking until // an item is available. 
Returns io.EOF to indicate graceful end of input. Recv(itemptr interface{}) error } Stream defines the interface for a bidirectional FIFO stream of typed values. type StreamServerCall interface { Stream ServerCall } StreamServerCall defines the in-flight context for a server method call, including methods to stream args and results. UniversalServiceMethods defines the set of methods that are implemented on all services. TODO(toddw): Remove this interface now that there aren't any universal methods? Or should we add VDL-generated Signature / MethodSignature / Glob methods as a convenience? Package rpc imports 18 packages and is imported by 295 packages.
https://godoc.org/v.io/v23/rpc
CC-MAIN-2020-34
en
refinedweb
Key Takeaways

- CQRS and Event Sourcing require specific infrastructure support for storage (the Event Store) and transmission (the Messaging Hub) of Commands, Queries, and Events.
- The variety of messaging patterns needed to support CQRS and Event Sourcing can be covered with a combination of existing middleware tools, such as Kafka and AMQP, but all-in-one solutions such as Axon Server are an attractive alternative.
- Getting Axon Server up and running is best done in a way that makes the installation itself "stateless", in the sense that we split the configuration into fixed and environment-specific settings.
- Axon Server Enterprise Edition adds the ability to run a cluster, but individual nodes can have pretty specific identities with large differences in the provided services. This impacts deployment strategies.
- Adding Access Control (tokens and user accounts) and TLS to Axon Server is pretty easy to accomplish.

CQRS and Event Sourcing in Java

Modern message-passing and event-driven applications have requirements that differ significantly from traditional enterprise applications. A definite sign is the shift in focus from guaranteed delivery of individual messages, where the responsibility for delivery rests mostly on the middleware, to "smart endpoints and dumb pipes," where it is up to the application to monitor delivery and punctuality. A result is that the Quality of Service requirements on the messaging infrastructure are more about throughput than delivery guarantees; a message not reaching its intended target is something sender as well as receiver must solve, as they are fundamentally responsible for the business requirements impacted by such a failure, and are best capable of determining a proper response.

Another change happening is a direct consequence of the steadily decreasing price of storage and processing power, as well as the increased flexibility in assigning those resources to application components: Command Query Responsibility Segregation, shortened to CQRS, and Event Sourcing. Rather than having a single component in your application landscape that manages the "Golden Record" for updates as well as queries, those two responsibilities are pulled apart, and multiple sources for querying are provided. The Command components, generally the low-frequency side of the equation, can optimize for validation and storage. Validated changes are then announced to the rest of the enterprise using Events, where (multiple) Query components use them to build optimized models. The increased usage of forward caches and batched copies was an early warning sign that this architectural pattern was desperately needed, and query models with replayable Event Stores formalize many of the solutions needed here.

Event Sourcing advances on this by defining the current state of an entity through the sequence of events that led to it. This means that, rather than keeping an updatable store of records, we use an append-only event store, allowing us to use Write-Once semantics and gain an incorruptible audit trail at the same time. To support these changes we see both traditional components, such as databases and message-oriented middleware, extended with the required functionality, as well as new, purpose-built infrastructure components. In the world of Java and Kotlin software development the Open Source Axon Framework provides the leading implementation of both the CQRS and the Event Sourcing paradigms, but it only provides a solution for the individual modules of the application.
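To make the command and query sides concrete before we look at infrastructure: in the Axon Framework, an event-sourced entity is typically written as an aggregate whose state only changes in event-sourcing handlers, while a separate projection builds a query model from the published events. The sketch below uses a hypothetical gift-card domain; the command, event, and query classes (IssueCardCommand, CardIssuedEvent, FetchBalanceQuery) are illustrative stand-ins, not code from this article's project.

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.spring.stereotype.Aggregate;

@Aggregate
public class GiftCard {

    @AggregateIdentifier
    private String cardId;
    private int remainingValue;

    protected GiftCard() {
        // Required by Axon to reconstruct the aggregate from its events.
    }

    @CommandHandler
    public GiftCard(IssueCardCommand cmd) {
        // Validate here; state changes happen only in the event handler below.
        AggregateLifecycle.apply(new CardIssuedEvent(cmd.getCardId(), cmd.getAmount()));
    }

    @EventSourcingHandler
    public void on(CardIssuedEvent evt) {
        this.cardId = evt.getCardId();
        this.remainingValue = evt.getAmount();
    }
}

The query side listens to the same events and answers queries from a model optimized for reading:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.queryhandling.QueryHandler;
import org.springframework.stereotype.Component;

@Component
public class CardSummaryProjection {

    // A deliberately simple in-memory read model; a real application would
    // typically maintain a database table shaped for its queries.
    private final Map<String, Integer> balances = new ConcurrentHashMap<>();

    @EventHandler
    public void on(CardIssuedEvent evt) {
        balances.put(evt.getCardId(), evt.getAmount());
    }

    @QueryHandler
    public Integer handle(FetchBalanceQuery query) {
        return balances.get(query.getCardId());
    }
}

The event-sourcing handler runs both when the command is handled and when the aggregate is later rebuilt from its event stream, which is what makes the append-only store sufficient as the source of truth.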
If you keep the application together in a monolithic setup, which admittedly provides the quickest way to get up and running for a greenfield development effort, it feels like a waste not to be able to take advantage of its support for a more distributed architecture. In itself the architecture of an Axon-based application lends itself to being split up quite easily, or "strangled", as has become the more popular term. The question is then how we can support the messaging and event store implementations.

The Architecture of a CQRS-Based Application

Typical CQRS applications have components exchanging commands and events, with persistence of the aggregates handled via explicit commands, and query models optimized for their usage and built from the events that report on the aggregate's state. The aggregate persistence layer in this setup can be built on RDBMS storage layers or NoSQL document components, and standard JPA/JDBC based repositories are included in the framework core. The same holds for storage of the query models.

The communication for exchanging the messages can be solved with most standard messaging components, but the usage patterns do favour specific implementations for the different scenarios. We can use pretty much any modern messaging solution for the publish-subscribe pattern, as long as we can ensure no messages are lost, because we want the query models to faithfully represent the aggregate's state. For commands we need to extend the basic one-way messaging into a request-reply pattern, if only to ensure we can detect the unavailability of command handlers. Other replies might be the resulting state of the aggregate, or a detailed validation failure report if the update was vetoed. On the query side, simple request-reply patterns are not sufficient for a distributed microservices architecture; we also want to look at scatter-gather and first-in patterns, as well as streamed results with continuing updates.

For Event Sourcing the aggregate persistence layer can be replaced with a Write-Once-Read-Many layer that captures all events resulting from the commands, and provides replay support for specific aggregates. Those same replays can be used for the query models, allowing us to either re-implement them using memory stores, or provide resynchronization if we suspect data inconsistency. A useful improvement is the use of snapshots, so we can prevent the need for replaying a possibly long history of changes, and the Axon Framework provides a standard snapshotting implementation for aggregates.

If we now take a look at what we have collected in infrastructure components for our application, we need the following:

- A "standard" persistence implementation for state-stored aggregates and query models.
- A Write-Once-Read-Many persistence solution for Event-Sourced aggregates.
- A messaging solution for:
  - Request-reply
  - Publish-subscribe with at-least-once delivery
  - Publish-subscribe with replay
  - Scatter-gather
  - Request-streaming-reply

The Axon Framework provides additional modules to integrate a range of Open Source products, such as Kafka and AMQP based solutions for event distribution. However, unsurprisingly, AxonIQ's own Axon Server can also be used as an all-in-one solution. This article series is about what you need to do to get it installed and running, starting with a simple local installation, and progressing to Docker based installations (including docker-compose and Kubernetes) and VMs "in the Cloud".
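As an aside on the snapshotting just mentioned: enabling Axon's standard snapshotting is mostly a matter of defining a trigger and pointing the aggregate at it. A minimal sketch, reusing the hypothetical GiftCard aggregate from earlier:

import org.axonframework.eventsourcing.EventCountSnapshotTriggerDefinition;
import org.axonframework.eventsourcing.SnapshotTriggerDefinition;
import org.axonframework.eventsourcing.Snapshotter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SnapshotConfiguration {

    @Bean
    public SnapshotTriggerDefinition giftCardSnapshotTrigger(Snapshotter snapshotter) {
        // Store a snapshot every 100 events, so a replay of this aggregate
        // starts from the latest snapshot instead of from event number one.
        return new EventCountSnapshotTriggerDefinition(snapshotter, 100);
    }
}

The aggregate then references the trigger by bean name, e.g. @Aggregate(snapshotTriggerDefinition = "giftCardSnapshotTrigger").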
Setting up the Test

To start, let's consider a small program to demonstrate the components we're adding to our architecture. We'll use Spring Boot for the ease of configuration, and Axon has a Spring Boot starter that will scan for the annotations we use. As a first iteration, we'll keep it to just a simple application that sends a command, which causes an event. For this to work we need to handle the command:

@CommandHandler
public void processCommand(TestCommand cmd) {
    log.info("handleCommand(): src = '{}', msg = '{}'.", cmd.getSrc(), cmd.getMsg());
    final String eventMsg = cmd.getSrc() + " says: " + cmd.getMsg();
    eventGateway.publish(new TestEvent(eventMsg));
}

The command and event here are simple value objects, the first specifying a source and a message, the other only a message. The same class also defines the event handler that will receive the event published above:

@EventHandler
public void processEvent(TestEvent evt) {
    log.info("handleEvent(): msg = '{}'.", evt.getMsg());
}

To complete this app we need to add a "starter" that sends the command:

@Bean
public CommandLineRunner getRunner(CommandGateway gwy) {
    return (args) -> {
        gwy.send(new TestCommand("getRunner", "Hi there!"));
        SpringApplication.exit(ctx);
    };
}

For this first version we also need a bit of supporting code to compensate for the lack of an actual aggregate as command handler, because the Axon Framework wants every command to have some kind of identification that allows subsequent commands for the same receiver to be correlated. The full code is available for download on GitHub; apart from the above it contains the TestCommand and TestEvent classes, and configures a routing strategy based on random keys, effectively telling Axon not to bother. This needs to be configured on the CommandBus implementation, and here we have to start looking at implementations for our architectural components.

If we run the application without any specific command bus and event bus implementations, the Axon runtime will assume a distributed setup based on Axon Server, and attempt to connect to it. Axon Server Standard Edition is freely available under the AxonIQ Open Source license, and a precompiled package of the Axon Framework and Server can be obtained from the AxonIQ website. Please note that the runs below use version "4.3"; your situation may differ depending on when you download it. If you put the executable JAR file in its own directory and run it using Java 11, it will start using sensible defaults.

With that in place, our test application starts up, connects to Axon Server, and runs the test:

...handleCommand(): src = "getRunner", msg = "Hi there!".
...handleEvent(): msg = "getRunner says: Hi there!".

For good measure, run it a few times, and if you're lucky, you may actually see more than one event handled. If it doesn't, add a "Thread.sleep(10000)" between the sending of the command and the call to "SpringApplication.exit()" and try again.

This small test shows our client app (let's call it that, since it is a client of Axon Server, and that is where we're going) connecting to Axon Server, which routes the command to the handler and sends the result back to the client. The handler sent an event, which went the same route, albeit on the EventBus rather than the CommandBus. This event was stored in Axon Server's Event Store, and the Event Handler will get all events replayed when it connects initially.
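For completeness, the value objects and the "don't bother" routing configuration could look like the following sketch; the actual code in the GitHub repository may differ in its details:

public class TestCommand {

    private final String src;
    private final String msg;

    public TestCommand(String src, String msg) {
        this.src = src;
        this.msg = msg;
    }

    public String getSrc() { return src; }
    public String getMsg() { return msg; }
}

// TestEvent follows the same pattern, carrying only 'msg'.

One way to express the random-key routing is a RoutingStrategy bean; Axon also ships an AnnotationRoutingStrategy with a configurable fallback policy for commands that carry no routing key:

import java.util.UUID;
import org.axonframework.commandhandling.distributed.RoutingStrategy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RoutingConfiguration {

    @Bean
    public RoutingStrategy routingStrategy() {
        // A random routing key per command: no correlation of subsequent
        // commands to a specific handler or aggregate instance.
        return command -> UUID.randomUUID().toString();
    }
}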
Actually, if you append the current date and time, say by simply appending a "new Date()" to the message, you'll see that the events are in fact nicely ordered as they came in.

From an Axon Framework perspective there are two kinds of Event Processors: Subscribing and Tracking. A Subscribing Event Processor will subscribe itself to a stream of events, starting at the moment of subscribing. A Tracking Event Processor instead tracks its own progress in the stream, and will by default start by requesting a replay of all events in the store. You can also see this as getting the events pushed (for Subscribing Event Processors) versus pulling events yourself (for Tracking Event Processors), and the implementation in the Framework actually works this way. The two most important places where this difference matters are query models and Event-Sourced aggregates, because it is there that you want to be sure to have the complete history of events. We'll not go into the details here and now, but you can read about it in the Axon Reference Guide.

In our test program, we can add a configuration class to select the Event Processor type:

@Profile("subscribing")
@Component
public class SubscribingEventProcessorConfiguration {

    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    @Autowired
    public void configure(EventProcessingConfigurer config) {
        log.info("Setting using Subscribing event processors.");
        config.usingSubscribingEventProcessors();
    }
}

With this, if you start the application with the Spring profile "subscribing" active, you'll see only one event processed, which will be the one sent in that run. Start the program without this profile and you get the default mode, which is tracking, and all prior events (including those sent while in subscribing mode) will be there again.

Running and Configuring Axon Server

Now that we have a client ready to use, let's take a bit more detailed look at the Axon Server side of things, since it takes care of both the message handling and the event storage. In the previous section we gave Axon Server its own directory to run in. While it is running, you'll see it has generated a "PID" file, containing the process ID, and a "data" directory with a database file and the event store in a directory named "default". For the moment, all the event store contains is one file for event storage and another for snapshots. These files will appear pretty large, but they are "sparse" in the sense that they have been pre-allocated to ensure availability of room, while as yet containing only a few events.

What we might have expected, but not seen, is a log file with a copy of Axon Server's output, and that is the first thing we'll remedy. Axon Server is a Spring Boot based application, which allows us to easily add logging settings. The name of the default properties file is "axonserver.properties", so if we create a file with that name and place it in the directory where Axon Server runs, the settings will be picked up. Spring Boot also looks at a directory named "config" in the current working directory, so if we want to create a scripted setup, we can put a file with common settings in the intended working directory, while leaving the "config/axonserver.properties" file for customizations.

The simplest properties needed for logging are those provided by all Spring Boot applications:

logging.file=./axonserver.log
logging.file.max-history=10
logging.file.max-size=10MB

Using these, after the initial banner, logging will be sent to "axonserver.log", and it will keep at most 10 files of at most 10MiB in size, which cleans up nicely. Next, let's identify some other properties that we might want to define as "common":

axoniq.axonserver.event.storage=./events

This will give the event store its own directory, because we can expect it to be the one growing continuously, and we don't want other applications to be impacted by a possible "disk full" situation. We can use disk mounts or symbolic links to let this location point to the volume where we actually want to use disk space, but this gives us the possibility to at least make the configuration common.
The simplest properties needed for logging are those provided by all Spring-boot applications: logging.file=./axonserver.log logging.file.max-history=10 logging.file.max-size=10MB Using these, after the initial banner, logging will be sent to “axonserver.log”, and it will keep at most 10 files of at most 10MiB in size, which cleans up nicely. Next let’s identify some other properties that we might want to define as “common”: axoniq.axonserver.event.storage=./events This will give the event store its own directory because we can expect it to be the one growing continuously, and we don’t want other applications to be impacted by a possible “disk full” situation. We can use disk mounts or symbolic links to let this location point to the volume where we actually want to use disk space, but this gives us the possibility to at least make the configuration common. axoniq.axonserver.snapshot.storage=./events To reduce the number of events in a replay, we can create snapshots. By default, they will be stored under the same location as the events themselves, but we can separate them if we want. Since they are tightly linked however we’ll keep them together. axoniq.axonserver.controldb-path=./data We’ll leave the ControlDB in its default location, and can use mounts or symlinks to put it on a separate volume. The ControlDB generally won’t take much room, so we can give it its own location, without worrying about disk usage too much. axoniq.axonserver.pid-file-location=./events As we saw, the PID file is by default generated in the current working directory of Axon Server. By changing it to the same location as the ControlDB, we have a single spot for relatively small files, while making the current working directory itself essentially Read-Only. logging.file=./data/axonserver.log This one is highly dependent on how strict you want log files to be separated from the rest of the files, as you could also opt to give Axon Server a directory under /var/log, and add settings for log rotation, or even use something like “ logging.config=logback-spring.xml” and use that for more detailed settings. axoniq.axonserver.replication.log-storage-folder=./log This is an Axon Server Enterprise Edition setting for the replication log, which stores the changelog for data distributed to the other nodes in a cluster. The amount of data here is configurable in the sense that you can set the interval for its cleaning when all committed changes will be removed from the log. With these settings, we have structured the way Axon Server will use disk space and set it up so we can use network or cloud storage, in such a way that we’re prepared for deploying it in a CI/CD pipeline. In the repo, I will also add startup and shutdown scripts that will run Axon Server in the background. Protecting our setup Since we definitely “need protection”, we’ll set up access control and TLS on our server. Access control will make a token required for requests to the REST and gRPC endpoints, and the UI will require an account. Additionally, some features require specific roles, where Enterprise Edition has a more elaborate set of roles and additionally allows roles to be specified per context. To start with Standard Edition, we can enable access control by setting a flag in the properties file and providing the token: axoniq.axonserver.accesscontrol.enabled=true axoniq.axonserver.accesscontrol.token=my-token You can use command line tools such as uuidgen to generate random tokens, which will be used for authentication. 
Now if you start Axon Server with these, not only will you need to specify the token to the CLI tool, but the UI will also suddenly require you to log in, even though we haven't created any users yet. We can solve that one easily using the CLI tool:

$ ./axonserver-cli.jar register-user -t my-token -u admin -p test -r ADMIN
$ ./axonserver-cli.jar users -t my-token
Name
admin
$

With this in place you can log in again. Additionally, if you want to make life a bit easier for the CLI, you can create a directory named "security" and copy the token to a file named ".token" in there. The CLI will check for such a directory and file relative to the current working directory:

$ mkdir security
$ echo my-token > security/.token
$ chmod 400 security/.token && chmod 500 security
$ ./axonserver-cli.jar users
Name
admin
$

On the client side we need to specify the token as well:

$ axonserver-quicktest-4.3-SNAPSHOT-exec.jar
2020-04-23 09:46:10.914  WARN 1438 --- [           main] o.a.a.c.AxonServerConnectionManager      : Connecting to AxonServer node [localhost]:[8124] failed: PERMISSION_DENIED: No token for io.axoniq.axonserver.grpc.control.PlatformService/GetPlatformServer

**********************************************
*                                            *
*  !!! UNABLE TO CONNECT TO AXON SERVER !!!  *
*                                            *
*  Are you sure it's running?                *
*  Don't have Axon Server yet?               *
*  Go to:                                    *
*                                            *
**********************************************

To suppress this message, you can
 - explicitly configure an AxonServer location,
 - start with -Daxon.axonserver.suppressDownloadMessage=true
2020-04-23 09:46:10.943  WARN 1438 --- [.quicktester]-0] o.a.e.TrackingEventProcessor             : Fetch Segments for Processor 'io.axoniq.testing.quicktester' failed: No connection to AxonServer available. Preparing for retry in 1s
2020-04-23 09:46:10.999  WARN 1438 --- [           main] o.a.c.gateway.DefaultCommandGateway      : Command 'io.axoniq.testing.quicktester.msg.TestCommand' resulted in org.axonframework.axonserver.connector.command.AxonServerCommandDispatchException(No connection to AxonServer available)

$ AXON_AXONSERVER_TOKEN=my-token axonserver-quicktest-4.3-SNAPSHOT-exec.jar
2020-04-23 09:46:48.287  INFO 1524 --- [mandProcessor-0] i.a.testing.quicktester.TestHandler      : handleCommand(): src = "QuickTesterApplication.getRunner", msg = "Hi there!".
2020-04-23 09:46:48.352  INFO 1524 --- [.quicktester]-0] i.a.testing.quicktester.TestHandler      : handleEvent(): msg = "QuickTesterApplication.getRunner says: Hi there!".
$

Given this, the next step is to add TLS, and we can do this with a self-signed certificate as long as we're running locally. We can use the "openssl" toolset to generate an X509 certificate in PEM format to protect the gRPC connection, and then package key and certificate in a PKCS12 format keystore for the HTTP port. The following steps (sketched below) will:
- Generate a Certificate Signing Request using an INI style configuration file, which allows it to work without user interaction. This will also generate an unprotected private key of 2048 bits length, using the RSA algorithm.
- Use this request to generate and sign the certificate, which will be valid for 365 days.
- Read both key and certificate, and store them in a PKCS12 keystore, under alias "axonserver". Because this cannot be an unprotected store, we give it password "axonserver".

We now have:
- tls.csr: The Certificate Signing Request, which we no longer need.
- tls.key: The private key in PEM format.
- tls.crt: The certificate in PEM format.
- tls.p12: The keystore in PKCS12 format.
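The commands behind those steps could look like the following sketch. The request configuration file name "csr.cfg" is an assumption; any INI-style request configuration with a CN matching your hostname will do:

$ openssl req -config csr.cfg -new -newkey rsa:2048 -nodes \
      -keyout tls.key -out tls.csr
$ openssl x509 -req -days 365 -in tls.csr -signkey tls.key -out tls.crt
$ openssl pkcs12 -export -name axonserver -inkey tls.key -in tls.crt \
      -out tls.p12 -passout pass:axonserver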
To configure these in Axon Server, we use:

# SSL for the HTTP port
server.ssl.key-store-type=PKCS12
server.ssl.key-store=tls.p12
server.ssl.key-store-password=axonserver
server.ssl.key-alias=axonserver
security.require-ssl=true

# SSL enabled for gRPC
axoniq.axonserver.ssl.enabled=true
axoniq.axonserver.ssl.cert-chain-file=tls.crt
axoniq.axonserver.ssl.private-key-file=tls.key

The difference between the two approaches stems from the runtime support used: the HTTP port is provided by Spring Boot using its own "server" prefixed properties, and it requires a PKCS12 keystore. The gRPC port instead is set up using Google's libraries, which want PEM encoded certificates. With these added to "axonserver.properties" we can restart Axon Server, and it should now announce "Configuration initialized with SSL ENABLED and access control ENABLED". On the client side we need to tell it to use SSL and, because we're using a self-signed certificate, we have to pass that too:

axon.axonserver.ssl-enabled=true
axon.axonserver.cert-file=tls.crt

Please note that I have added "axonserver.megacorp.com" as a hostname to the system's "hosts" file, so other applications can find it and the name matches the one in the certificate. With this, our quick tester can connect using TLS (removing timestamps and such):

...Connecting using TLS...
...Requesting connection details from axonserver.megacorp.com:8124
...Reusing existing channel
...Re-subscribing commands and queries
...Creating new command stream subscriber
...Worker assigned to segment Segment[0/0] for processing
...Using current Thread for last segment worker: TrackingSegmentWorker{processor=io.axoniq.testing.quicktester, segment=Segment[0/0]}
...Fetched token: null for segment: Segment[0/0]
...open stream: 0
...Shutdown state set for Processor 'io.axoniq.testing.quicktester'.
...Processor 'io.axoniq.testing.quicktester' awaiting termination...
...handleCommand(): src = "QuickTesterApplication.getRunner", msg = "Hi there!".
...handleEvent(): msg = "QuickTesterApplication.getRunner says: Hi there!".
...Released claim
...Worker for segment Segment[0/0] stopped.
...Closed instruction stream to [axonserver]
...Received completed from server.

So how about Axon Server EE

From an operations perspective, running Axon Server Enterprise Edition is not that different from Standard Edition, with the most prominent differences being:
- You can have multiple instances working together as a cluster,
- The cluster supports more than one context (in SE you only have "default"),
- Access control has a more detailed set of roles,
- And applications get their own tokens and authorizations.

On the connectivity side, we get an extra gRPC port used for communication between the nodes in the cluster, which defaults to port 8224. In this article we will not dive into the details of the Raft consensus protocol used for this communication; what matters for now is that decisions require a majority of the nodes, which is why a cluster works best with an odd number of them, to preclude the possibility of a tie.

Axon Server Clustering

[Figure: A node in an Axon Server cluster]

A consequence of the support for multiple contexts and different roles, each settable per node, is that those individual nodes can have pretty big differences in the services they provide to the client applications. In that case increasing the number of nodes does not have the same effect on all contexts: although the messaging load will be shared by all nodes supporting a context, the Event Store has to distribute the data to an additional node, and a majority needs to acknowledge storage before the client can continue.
Another thing to remember is that the "ACTIVE_BACKUP" and "PASSIVE_BACKUP" roles have pretty specific (Raft-related) meanings, even though the names may suggest different interpretations from the world of High Availability. In general, an Axon Server node's role does not change just to solve an availability problem. The cluster can keep functioning as long as a majority of the nodes is available for a context, but if this majority is lost for the "_admin" context, cluster configuration changes cannot be committed either. For a locally running cluster, we need to make a few additions to our "common" set of properties, the most important of which concern cluster initialization: when a node starts, it does not yet know if it will become the nucleus of a new cluster, or will be added to an existing cluster with a specific role. So if you start Axon Server EE and immediately start connecting client applications, you will receive an error message indicating that there is no initialized cluster available. If you just want a cluster with all nodes registered as "PRIMARY", you can add the autocluster properties:

axoniq.axonserver.autocluster.first=axonserver-1.megacorp.com
axoniq.axonserver.autocluster.contexts=_admin,default

With these added, the node whose hostname and cluster-internal port match the "first" setting (with no port specified, this of course defaults to 8224) will initialize the "default" and "_admin" contexts if needed. The other nodes will use the specified hostname and port to register themselves with the cluster, and request to be added to the given contexts. A typical solution for starting a multi-node cluster on a single host is to use the port properties to have the nodes expose themselves next to each other. The second node would then use:

server.port=8025
axoniq.axonserver.port=8125
axoniq.axonserver.internal-port=8225
axoniq.axonserver.name=axonserver-2
axoniq.axonserver.hostname=localhost

The third can use 8026, 8126, and 8226. In the next installment we'll be looking at Docker deployments, and we'll also customize the hostname used for the cluster-internal communication.

Access control for the UI and Client Applications

Maybe a little explanation is needed around enabling access control, especially from the perspective of the client. As mentioned above, the effect is that client applications must provide a token when connecting to Axon Server. This token is used for both HTTP and gRPC connections, and Axon Server uses a custom HTTP header named "AxonIQ-Access-Token" for this. For Standard Edition there is a single token for both connection types, while Enterprise Edition maintains a list of applications and generates a UUID as token for each. The cluster-internal port uses yet another token, which needs to be configured in the properties file using "axoniq.axonserver.internal-token".

A separate kind of authentication possible is using username and password, which works only for the HTTP port. This is generally used for the UI, which shows a login screen if enabled, but it can also be used for REST calls using BASIC authentication:

$ curl -u admin:test
[{"userName":"admin","password":null,"roles":["ADMIN@*"]}]
$

The CLI is also a kind of client application, but only through the REST API. As we saw earlier you can use the token to connect when access control is enabled, but if you try this with Axon Server EE, you will notice that this road is closed. The reason is the replacement of the single, system-wide token with the application-specific tokens.
Actually, there still is a token for the CLI, but it is now local per node and generated by Axon Server, and it is stored in a file called “security/.token”, relative to the node’s working directory. We also encountered this file when we looked at providing the token to the CLI. We will get back to this in part two, when we look at Docker and Kubernetes, and introduce a secret for it. End of part one This ends the first installment of this series on running Axon Server. In part two we will be moving to Docker, docker-compose, and Kubernetes, and have some fun with the differences they bring us concerning volume management. See you next time!
https://www.infoq.com/articles/axon-server-cqrs-event-sourcing-java/?topicPageSponsorship=c1246725-b0a7-43a6-9ef9-68102c8d48e1&itm_source=articles_about_java&itm_medium=link&itm_campaign=java
CC-MAIN-2020-34
en
refinedweb
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4)

#include <net_config.h>

void *ftp_fopen (
  U8* fname,    /* Pointer to name of file to open. */
  U8* mode);    /* Pointer to mode of operation. */

The ftp_fopen function opens a file for reading or writing. The argument fname specifies the name of the file to open. The mode defines the type of access permitted for the file. It can have one of the following values:

"r"  Opens the file for reading.
"w"  Opens the file for writing.

The ftp_fopen function is in the FTP_uif.c module. The prototype is defined in net_config.h.

The ftp_fopen function returns a pointer to the opened file. The function returns NULL if it cannot open the file.

See also: ftp_fclose, ftp_fread, ftp_fwrite

void *ftp_fopen (U8 *fname, U8 *mode) {
  /* Open file 'fname' for reading or writing. */
  return (fopen ((const char *)fname, (const char *)mode));
}
https://www.keil.com/support/man/docs/rlarm/rlarm_ftp_fopen.htm
CC-MAIN-2020-34
en
refinedweb
User:Eoconnor/ISSUE-41 Zero-edit Change Proposal for ISSUE-41 Summary The basic question of ISSUE-41 is (as asked on public-html) "should HTML 5 provide an explicit means for others to define custom elements and attributes within HTML markup?" In a word, no. HTML5's existing extension points provide all the features needed to solve the use cases that give rise in some to the desire for DE. ." (from the WHATWG FAQ). Contents - 1 Zero-edit Change Proposal for ISSUE-41 - 1.1 Summary - 1.2 Rationale - 1.2.1 HTML's exisiting extension points - 1.2.2 Use Case 1 - 1.2.3 Use Case 2 - 1.2.4 Use Case 3 - 1.2.5 Use Case 4 - 1.2.6 Use Case 5 - 1.2.7 Use Case 6 - 1.2.8 Use Case 7 - 1.3 Details - 1.4 Impact - 1.5 References - 1.6 Contributors Rationale I've gathered together many of the use cases for DE I could find posted to public-html, each attributed to the original email, blog post, or such which defined it. I've also tried to consolidate similar or identical use cases together so as to avoid redundancy. All but one of these use cases can be addressed with the existing HTML extension points. The remaining use case is best left unaddressed, as discussed later on in this CP. HTML's exisiting extension points HTML has many existing extension points for authors to use. As listed in section 2.2.2 Extensibility: - an inline or server-side scripts. - Authors can create plugins and invoke them using the <embed> element. This is how Flash works. - Authors can extend APIs using the JavaScript prototyping mechanism. This is widely used by script libraries, for instance. -. Vendors unwilling to add additional extension points at this time Representatives of browser vendors have expressed reluctance to add additional extension points to HTML, including Microsoft, who think DE "isn't important enough to justify changes [to the spec] at this time" (source). Use Case 1 - Annotate structured data that HTML has no semantics for, and which nobody has annotated before, and may never again, for private use or use in a small self-contained community. (source) Structured data can be published in HTML by using class="" and rel="" as in Microformats, with the Microdata feature, with HTML5+RDFa, or several of the other existing extension points, both separately and together. Use Case 2 - Site owners want a way to provide enhanced search results to the engines, so that an entry in the search results page is more than just a bare link and snippet of text, and provides additional resources for users straight on the search page without them having to click into the page and discover those resources themselves. (source) A search engine could define a Microdata or RDF vocabulary for publishers to use. Use Case 3 - Remove the need for feeds to restate the content of HTML pages (i.e. replace Atom with HTML). (source) The hAtom microformat solves this use case, and it is built on top of the existing extension points of HTML. Use Case 4 - Remove the need for RDF users to restate information in online encyclopedias (i.e. replace DBpedia). (source) The HTML5+RDFa spec being worked on by this WG can address this use case, as can the Microdata feature. Use Case 5 -. (source 1, source 2) As with use case 1, such extensions can be published in HTML by using class="" and rel="" as in Microformats, with the Microdata feature, with HTML5+RDFa, or several of the other existing extension points, both separately and together. 
Name collisions can be avoided in several different ways, and authors do not need to wait for browser vendors to implement anything new before they can start using their extension. Use Case 6 - Round-trip metadata across sessions, maintaining a strong metadata association that is resilient to subsequent editing operations by other user agents. Both whole HTML files and smaller document fragments need to round-trip. Such metadata may include information about a WYSIWYG editor's state, author information, relationships between this document and others, or a reference to the document's original source. (source) This use case can be addressed with the existing extension points of HTML: - Editor state informaiton can be placed in data-*=""attributes. - Author information can be represented by <meta name=author>; authoris one of the standard metadata names. - Relationships between this document and others can be expressed using the rel=""attribute. - References to the document's original source can be expressed using rel=alternateor rel=bookmark, both standard link relations, or a custom link relation could be used. Use Case 7 - An existing software product currently outputs XHTML documents with other, non-SVG and non-MathML Namespaces-in-XML content mixed in. Users of this product would like to publish such content as text/html, and to have content published as such pass HTML5 conformance testing. This use case cannot be addressed by use of HTML's existing extension points. This is a feature, not a bug. As stated in section 2.2.2 Extensibility: "Vendor-specific proprietary user agent extensions to this specification are strongly discouraged. Documents must not use such extensions, as doing so reduces interoperability and fragments the user base, allowing only users of specific user agents to access the content in question." Of course, such software can continue to use XHTML. One of the other DE Change Proposals describes three classes of such extensions. Platform Extensions "Platform Extensions" such as SVG and MathML that define new types of content that can be rendered in a browser. These extensions are expected to be vendor-neutral and have a specification. They may become part of HTML in the future. Such extensions should be coordinated among browser vendors within this working group. Language Extensions "Language Extensions" such as RDFa that define new attributes on top of nodes from other namespaces. As argued on public-html, ..." Vendor-specific Experimental Extensions "Vendor-specific Experimental Extensions" such as the experimental features that Webkit and Mozilla have created. The spec already provides for this with the vendor--feature="" pattern for vendor-specific attributes. Just as with -vendor-foo CSS properties, use of such attributes should not be considered conforming. Not providing such a feature for element names is intentional; for an excellent argument against such a feature, see this email to public-html. Details No change to the spec. Impact Positive Effects We avoid adding complex new features without concrete use-cases to the already complex web platform. Negative Effects If a particular use-case isn't addressed, users may end up attempting to extend HTML themselves in a non-conformant manner. This has been a potential problem for decades in HTML, however, and we haven't seen very much actual damage. As well, the majority of extensibility use-cases have already been addressed in HTML, so that further limits such potential damage. 
Conformance Classes Changes No change. Risks None. After all, we can always add further extension mechanisms later should the need arise. References References are linked inline. Contributors - Initial draft of this Change Proposal by Ian Hickson - Edward O'Connor - Tab Atkins Other collaborators welcome!
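As a concrete illustration of the extension points this proposal relies on, here is a small hypothetical fragment; the class names, data-* attribute, and link targets are invented for the example:

<head>
  <meta name="author" content="Jane Q. Author">
</head>
<body>
  <article class="hentry" data-editor-state="draft-3">
    <h1 class="entry-title">An extensible post</h1>
    <p class="entry-content">Structured data expressed with class names alone.</p>
    <a rel="bookmark" href="posts/42.html">Permalink</a>
  </article>
</body>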
https://www.w3.org/html/wg/wiki/User:Eoconnor/ISSUE-41
CC-MAIN-2020-34
en
refinedweb
The plot generated by Matplotlib typically has a lot of padding around it. This is useful if you are viewing or displaying the plot in isolation. However, when the plot is embedded inside another document, the extra padding around it typically makes the plot look tiny. The solution is to reduce or remove the padding around the plot generated by Matplotlib. This can be done by configuring the bounding box used for the plot while saving it to disk:

import matplotlib.pyplot as mplot
mplot.savefig("foo.pdf", bbox_inches="tight")

This makes the bounding box tight around the plot, while still giving enough space for the text or lines on the plot periphery. If you want a plot with zero padding around it:

import matplotlib.pyplot as mplot
mplot.savefig("foo.pdf", bbox_inches="tight", pad_inches=0)

Personally, I find this too tight, but it might be useful in some situations.

Tried with: Matplotlib 1.4.3, Python 2.7.3 and Ubuntu 14.04

One thought on "How to remove padding around plot in Matplotlib"

Thank you, works fine for me…
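Putting the snippets above together, a complete runnable version (the sine plot is just an arbitrary stand-in for your own figure):

import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as mplot
import numpy

x = numpy.linspace(0, 2 * numpy.pi, 100)
mplot.plot(x, numpy.sin(x))
mplot.xlabel("x")
mplot.ylabel("sin(x)")

# Tight bounding box, with a little breathing room for labels
mplot.savefig("foo.pdf", bbox_inches="tight")

# Or truly zero padding
mplot.savefig("foo-zero.pdf", bbox_inches="tight", pad_inches=0)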
https://codeyarns.com/2015/07/29/how-to-remove-padding-around-plot-in-matplotlib/?shared=email&msg=fail
CC-MAIN-2020-34
en
refinedweb
ActionCable Part 3: can't write unknown attribute `user_id`, what am I missing?

Having some trouble figuring out what I'm missing when submitting a message in the chatroom: I get an error from my messages_controller.rb saying can't write unknown attribute `user_id`. However, checking the source files I saw that my messages_controller.rb is exactly the same as the example, yet the line message.user = current_user in my create action is flagged as incorrect:

Error encountered at about 13min into tutorial.

def create
  message = @chatroom.messages.new(message_params)
  message.user = current_user
  message.save
  redirect_to @chatroom
end

GoRails Screencast code from part 3:

My Code:

Running ruby 2.3.0p0 (2015-12-25 revision 53290) [x86_64-darwin16], Rails 5.0.0.1

Thanks for your help,
-RS

I read this earlier and wasn't quite sure what it was, but that would explain it! :) Happy to help if you have any other questions.
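For anyone who lands here with the same message: "can't write unknown attribute" generally means the messages table has no user_id column for ActiveRecord to write to. A hypothetical migration that would add it (not taken from this thread):

class AddUserToMessages < ActiveRecord::Migration[5.0]
  def change
    # Adds user_id (with index and foreign key) to messages
    add_reference :messages, :user, foreign_key: true
  end
end

After running rails db:migrate, the assignment message.user = current_user has a column to write to.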
https://gorails.com/forum/actioncable-part-3-can-t-write-unknown-attribute-user_id-what-am-i-missing
CC-MAIN-2020-34
en
refinedweb
Issue with threading and opencv

Hello All, I have written a program in OpenCV to fetch an image from a camera and save it in a folder. There is another program which takes the image from the same folder and does some computation on it. I combined both programs and put the second piece of logic in a thread. Now the problem, as far as I figured out, is this: the main program tries to save an image in the folder and at the same time the thread tries to read a different image from the same folder. So sometimes only half of the image is saved and the other half contains no data. Also, the thread breaks. I tried using QMutex, but the issue was that the thread breaks if the main thread already holds the lock. The summary of the problem is:
-> How can I provide synchronization between the threads?
-> How should I make the other thread wait until the lock is released?

A solution doesn't quite come to my mind, but I have to ask: why do you save the image to the HD first? Seems like an unnecessary step?

@J.Hilk : Yes, I agree with you, it's unnecessary to save the image. In the future we will store it internally in a queue. But this is an interesting use case. I wanted to understand the problem exactly so that I can avoid it in the near future.

@Kira how do you save the file, also with Qt? Then maybe QSaveFile will help. That said, maybe using QFileSystemWatcher to monitor your saved file would also reduce the error frequency?

@J.Hilk : I am using imwrite, an OpenCV function, to save the image. Actually the problem arose after I started using the thread. So I just want to know how the thread is causing the issue.

@Kira The thread writing the file should emit a signal when the file has been written and closed. The thread consuming files should only read files which were already announced through that signal. This way you would avoid reading and writing the same file from two different threads at the same time.

@jsulm : Just one question for clarification. If I use two different mutexes, will the one mutex know about the other if I try to lock? Or should I use the same mutex wherever the conflicting situation is expected? Also, while going through OpenCV I got the following warning which was not seen before: Warning: Premature end of JPEG file

@Kira said in Issue with threading and opencv:

If i use two different mutex, will the one mutex will inform about other if i tried to lock ?

No. But with the solution I suggested there is no need at all for any mutexes. Just use signals/slots as I suggested, with the queued connection type.

"Warning: Premature end of JPEG file" - I guess it's because the file is damaged.

@jsulm : Thanks for highlighting the possibility of signals/slots. I will have to redesign my approach to implement it. Just a couple of questions:
-> If I queue signals and slots, will it affect my overall program? I.e., suppose I queue the signals/slots related to this task; will it affect the other signals/slots used in my program, for example by delaying the execution of the signals/slots of the main program?
-> How can I make a mutex wait for the previous lock if we are not sure how much time it is going to take?

@Kira said in Issue with threading and opencv:

Suppose i queue the signal/slots related to this program will it affect the other signal/slots written in my program . For ex: delay in execution of the signal/slots of the main program.

Queued signals do not block anything - they are just put in a queue. The only delay you will have is in the slot when it is actually called - but you have that anyway, as you have to execute the code in any case, right?

"How to make a mutex wait for the previous lock if we are not sure how much time it is going to take" - I don't understand this question. A mutex either blocks the thread which wants to lock it if the mutex is already locked, or lets you continue immediately. So, you already have this "waiting" if the mutex is already locked.

@jsulm : Yes, thanks for the help; that cleared a lot of implementation doubts. Just a small question. If I have a thread which acquires the lock and there is another thread which requests the locked resource, how do I tell the other thread to wait until the lock in the first thread has been released? We can currently use a wait time, but is there any way to get a notification when the lock is released and tell the other thread to wait for exactly that time? And one more thing: how do you highlight specific lines which you are answering, as you did for my earlier questions? :)

@Kira said in Issue with threading and opencv:

How will i tell the other thread wait until

That's exactly what a mutex is doing: it blocks the thread which tries to lock a mutex which is already locked by another thread; as soon as the other thread releases the mutex, the OS will wake up one of the blocked (waiting) threads. So, you do not have to tell the thread anything.

@jsulm : Hello, I have implemented the save-image function in a slot as suggested by you. But the problem is that the image is not being saved. I would like to share the code:

signals:
void signalSaveImage(cv::Mat, String row, String column);

public slots:
void saveImage(Mat finalImage, String row, String column);

Slot definition:

void MainWindow::saveImage(Mat finalImage, String row, String column)
{
    cout<<"Inside image save function"<<endl;
    cout<<"Row"<<row<<endl;
    cout<<"Column"<<column<<endl;
    String imagePath = "D:/testN/";
    String imageFormat = ".bmp";
    try {
        cv::imwrite(imagePath+row+"_"+column+imageFormat, finalImage);
    }
    catch (cv::Exception& ex) {
        fprintf(stderr, "Exception in saving image: %s\n", ex.what());
    }
    fprintf(stdout, "Saved PNG file with alpha data.\n");
    cout<<"\n Entering inside save mutex"<<endl;
    cout<<"\n Completed image save function"<<endl;
}

Signal-to-slot connection:

connect(this, SIGNAL(signalSaveImage(cv::Mat,String,String)), this, SLOT(saveImage(Mat,String,String)), Qt::QueuedConnection);

MainWindow.cpp, emitting the signal to save the image:

emit signalSaveImage(image, to_string(row), to_string(column));

But my problem is that my images are not being saved.

@Kira A QueuedConnection forces a copy of the arguments, and the meta-object system doesn't know the type cv::Mat, so it can't be transmitted via a QueuedConnection. Check your console output; you should have warnings about unknown metatypes (declare and/or register them).

@J.Hilk : I have registered it:

#include "mainwindow.h"
#include <QApplication>

int main(int argc, char *argv[])
{
    qRegisterMetaType< cv::Mat >("cv::Mat");
    QApplication a(argc, argv);
    MainWindow w;
    w.show();
    return a.exec();
}

The data most likely goes out of scope, as the copy constructor of cv::Mat actually does no copying. I had the same issues in my time with OpenCV, and ended up converting it to a QImage before transferring it from the OpenCV thread to the normal one.

@J.Hilk : Just for clarification, should I replace this part with cv::clone?

void saveImage(Mat finalImage, String row, String column);

@J-Hilk : Will try. Also, I tried to run the program in debug mode and am getting the following error; the debugger redirects to the following lines of the ostream file:

// MANIPULATORS
template<class _Elem, class _Traits>
inline basic_ostream<_Elem, _Traits>& __CLRCALL_OR_CDECL endl(basic_ostream<_Elem, _Traits>& _Ostr)
{   // insert newline and flush stream
    _Ostr.put(_Ostr.widen('\n'));
    _Ostr.flush();
    return (_Ostr);
}

And the program quits. Can you please explain what may be the relevant reason for the error?

@Kira said in Issue with threading and opencv:

this is the stack trace after crash

no, it isn't

@jsulm : This is the only thing which I see after the crash with the exception triggered. Is there any other way to do this?

@Kira Yes. Now you need to go from top to bottom until you hit the first source code file from your project and check it at the line mentioned in the stack trace.

@jsulm : I tried doing it, but the error only shows up after the thread has run more than about 150 times, which is making the debugging process very difficult. I also have a sample test case on GitHub; can you please go through it for any possible issues?

@Kira said in Issue with threading and opencv:

150 times

You can tell a breakpoint to be ignored for a certain number of hits.

@Kira I actually suggested going through the stack trace after the crash from top to bottom. No breakpoints needed for that.

@jsulm :

@Kira said in Issue with threading and opencv:

@jsulm : Are you referring to this: Please let me know in case of any issues

This is the stack trace which I get, posted earlier, reading from line no. 1. It does not point to the line of code where the program breaks. But lines 16 and 17 point to lines of the code. Should I refer to those lines for the particular cause of the break? Please do let me know if I am getting something wrong.

@jsulm : Line 16 is pointing to the following line in bold:

void ImageMapping::requestReWorkNew() {
    cout<<"Inside requestRework Newwwwwwwwww"<<endl;
    const int sleep = 50;
    int queueSize = queueImageName.size();
    cout<<"Queue size Request Rework New: "<<queueSize<<endl;
    if(queueImageName.size()!=0){
        //Get the current image number from the list of images
        mutex.lock();
        _working = true;
        _abort = false;
        mutex.unlock();
        cout<<"Emiting resume work"<<endl;
        emit resumeWork();
    }else{
        cout<<"Signalling retrying"<<endl;
        emit retrying();
    }
}

This function is basically a slot triggered by the following signals:

connect(imageMapping, SIGNAL(finished()), imageMapping, SLOT(requestReWorkNew()));
connect(imageMapping, SIGNAL(retrying()), imageMapping, SLOT(requestReWorkNew()));

@Kira Are you sure this is the line? The actual line is mentioned in the "Line" column in the stack trace. So, it would be 2129.

@Kira you can double-click on the line (in Qt Creator) of the stack trace and it will open the file at the correct line.

@jsulm : Actually, I have recreated the issue. The program I am referring to now is from GitHub. I have cross-checked: the line which I highlighted is the same. If you want, I can repost the image.

@J-Hilk @jsulm : Guys, have you gone through the code on GitHub? Do you see any issue with the threading logic? :)

@J-Hilk @jsulm : Hello guys, sorry for the late update; I was working on the issue and would like to share what I found. I trimmed down the issue by first commenting out all the cout statements, since the logs pointed to the ostream file behind cout. The call stack then pointed to the OpenCV file, stating it was unable to save the image. So I commented out the save function to see whether anything else was causing the issue. What I found was that my program was still breaking, and the call stack pointed to the thread function. I placed a counter on the signal and slot of the thread, and found that they executed approximately 666 times before breaking.

@J-Hilk @jsulm : Guys, thanks for your support. I finally figured out where exactly the problem was: there was an issue with the logic used in the implementation. As you can see in the code on GitHub, instead of requesting the task inside the doWork function, every time a signal is raised a slot is called to execute the task. This led to so many signal and slot calls being made within a given time that it caused the stack to overflow. The issue will be resolved by removing the signal/slot loop. This issue was top priority and a big learning experience for my future work with the signal and slot mechanism. Thanks for your support. Just two last questions which this implementation triggered, and which I would like answered:
- Is there any limitation on how many slot calls can reside on the stack at a given time? As per my observation, approximately 666 signal/slot calls were made, after which the program crashed.
- Is the slot entry cleared or kept on the stack after execution, and can one increase the size of the stack or move it to the heap?
Regards

Hi, great that you managed to work it out. Sadly I'm quite a bit under time pressure and was unable to take a closer look at your uploaded project :(

To your question: the (default) stack limit is highly OS dependent, but in most cases the max stack size of the main thread is large enough that you won't run into issues, even in highly recursive functions. IIRC, on an ordinary desktop PC the call stack defaults to 1 MB on Windows, 8 MB on Linux, and 8-64 MB on macOS. For secondary threads that can be quite different; on macOS it's reduced to 512 KB.
https://forum.qt.io/topic/103817/issue-with-threading-and-opencv
CC-MAIN-2020-34
en
refinedweb
Asked by: Opening Control Panel applet

Question

Below is some code to open the Control Panel or one of its applets. TCError() is a wrapper for FormatMessage(). When (*szArg == 0) it will open the Control Panel on both Windows 7 and Windows 10. When (*szArg != 0) and szItemName is a valid name (e.g., L"Microsoft.System") it will open the applet on Windows 7. On Windows 10, IOpenControlPanel::Open fails with ERROR_ACCESS_DENIED. I am an administrator on both test machines. What's up? Thanks. [I would have started in a Win10 or security forum, but my own code is the only way I can reproduce the problem.]

CoInitialize(NULL);
IOpenControlPanel *pPanel = NULL;
HRESULT hr = CoCreateInstance(CLSID_OpenControlPanel, NULL, CLSCTX_INPROC_SERVER,
                              IID_IOpenControlPanel, (LPVOID*) &pPanel);
if ( SUCCEEDED(hr) ) {
    hr = pPanel->Open(*szArg == 0 ? NULL : szItemName, NULL, NULL);
    if ( hr != S_OK ) {
        TCError(hr & 0xFFFF, L"IOpenControlPanel::Open");
    }
    pPanel->Release();
}
CoUninitialize();

All replies

For what it's worth, the canonical name Microsoft.System is not shown as supported on Win 10 according to the documentation:
- System
- Canonical name: Microsoft.System
- GUID: {BB06C0E4-D293-4f75-8A90-CB05B6477EEE}
- Supported OS: Windows Vista, Windows 7, Windows 8, Windows 8.1
- Module name: @%SystemRoot%\System32\systemcpl.dll,-1

Nothing on that page mentions Windows 10.

When I dig the canonical names out of the registry (HKLM\...\CP\namespace\CLSID ... HKCR\CLSID\System.ApplicationName) I get all the expected ones (including "Microsoft.System"). On Windows 10, the command line "control /name Microsoft.System" works.

I did some tests on Windows 10 - 1803, 17134.285 and it works normally for me (even when not admin):

IOpenControlPanel *pPanel;
HRESULT hr = CoCreateInstance(CLSID_OpenControlPanel, NULL, CLSCTX_INPROC_SERVER,
                              IID_IOpenControlPanel, (LPVOID*)&pPanel);
if (SUCCEEDED(hr)) {
    // hr = pPanel->Open(L"Microsoft.ProgramsAndFeatures", NULL, NULL);
    hr = pPanel->Open(L"Microsoft.System", NULL, NULL);
    pPanel->Release();
}

None worked. FWIW, I'm using VS2010.

Hi,

I have tested IOpenControlPanel::Open in VS2017 using your code, and it seems to work well. I suggest you download VS2017 and test your code there. If it does not work in VS2017, try running your stand-alone console test exe as Administrator (right-click your mouse); if it works well then, you could use LogonUser to get admin permission temporarily before you call IOpenControlPanel::Open. Or you could try checking whether your applets are registered correctly.

Best Wishes,
Jack Zhang

ShellExecute(Ex) works. I was mistaken about the stand-alone tests. They worked when I added CoInitialize and CoUninitialize (doh!). I also put the code into another plugin (for JPSoft's TCC) DLL. It failed there also. I didn't mention earlier that it fails similarly if I run it in TCC elevated.
Since I also unravel the CLSID/canonical name relationship, I could easily rewrite it to use ShellExecuteEx. Heck, I could even rewrite it to execute (for example) "control.exe /name Microsoft.System". But I'm really curious about why my original way which works on Windows 7 fails on Windows 10. It's quite a mystery!
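For completeness, the fallback mentioned above could look roughly like this sketch (not the poster's actual code):

#include <windows.h>
#include <shellapi.h>

// Open a Control Panel applet by canonical name via control.exe.
void OpenApplet(LPCWSTR szItemName)
{
    WCHAR szArgs[128];
    wsprintfW(szArgs, L"/name %s", szItemName); // e.g. L"Microsoft.System"
    ShellExecuteW(NULL, L"open", L"control.exe", szArgs, NULL, SW_SHOWNORMAL);
}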
https://social.msdn.microsoft.com/Forums/en-US/b2eba1b7-2e67-44b4-9b30-b93551ae3810/opening-control-panel-applet?forum=vcgeneral
CC-MAIN-2020-34
en
refinedweb
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4)

#include <net_config.h>

void ftpc_fclose (
  FILE* file);    /* Pointer to the file to close. */

The ftpc_fclose function closes the file identified by the file stream pointer in the function argument.

The ftpc_fclose function is in the FTPC_uif.c module. The prototype is defined in net_config.h.

The ftpc_fclose function does not return any value.

See also: ftpc_fopen, ftpc_fread, ftpc_fwrite

void ftpc_fclose (void *file) {
  /* Close a local file. */
  fclose (file);
}
https://www.keil.com/support/man/docs/rlarm/rlarm_ftpc_fclose.htm
CC-MAIN-2020-34
en
refinedweb
An integer number is known as an Armstrong number if the sum of the cubes of its individual digits is equal to the number itself. Here we will write a program to display the Armstrong numbers up to 1000; if you are looking for a program to check whether a given number is an Armstrong number, refer to: C++ program to check whether input number is Armstrong or not.

Example: Prints Armstrong numbers up to 1000

This program prints the Armstrong numbers between 1 and 1000. To understand this program you should have the knowledge of nested for loops.

#include <iostream>
#include <cmath>
using namespace std;

int main(){
   int sum, num;
   cout<<"Armstrong numbers between 1 and 1000: ";
   for(int i = 0; i < 10; i++) {
      for(int j = 0; j < 10; j++) {
         for(int k = 0; k < 10; k++) {
            num = i * 100 + j * 10 + k;
            sum = pow(i, 3) + pow(j, 3) + pow(k, 3);
            if(num == sum)
               cout<<num<<" ";
         }
      }
   }
   return 0;
}

Output:

Armstrong numbers between 1 and 1000: 0 1 153 370 371 407
https://beginnersbook.com/2017/09/cpp-program-to-display-armstrong-numbers-between-1-and-1000/
CC-MAIN-2018-05
en
refinedweb
The BtoSGammaFlatEnergy class is a model of the hadronic mass in b to s gamma decays which produces a flat photon energy spectrum and as such is only intended for testing purposes. More...

#include <BtoSGammaFlatEnergy.h>

The BtoSGammaFlatEnergy class is a model of the hadronic mass in b to s gamma decays which produces a flat photon energy spectrum and as such is only intended for testing purposes.

Definition at line 30 of file BtoSGammaFlatEnergy.h.

Make a simple clone of this object. Implements ThePEG::InterfacedBase. Definition at line 69 of file BtoSGammaFlatEnergy.h.

Output the setup information for the particle database. Reimplemented from Herwig::BtoSGammaHadronicMass.

Make a clone of this object, possibly modifying the cloned object to make it sane. Reimplemented from ThePEG::InterfacedBase. Definition at line 75 of file BtoSGammaFlatEnergy.h.

Return the hadronic mass. Implements Herwig::BtoSGammaHadronicMass.

Functions for reading in and writing out persistent data. Definition at line 84 of file BtoSGammaFlatEnergy.h.
http://herwig.hepforge.org/doxygen/classHerwig_1_1BtoSGammaFlatEnergy.html
CC-MAIN-2018-05
en
refinedweb
Hi, how do I increase the height of the Expander control HEADER in XAML?

Hi all, can anybody tell me how to count the number of characters while the user is typing in the RichTextBox control, and display the number of characters above the control? Thanks, Mansoor

This is my code:

public class Regi : CompositeControl
{
    private TextBox nameTextBox;

    [DefaultValue(typeof(Unit), "400px")]
    public override Unit Width
    {
        get { return base.Width; }
        set { base.Width = value; }
    }
}

Hi, I've got a problem where users with Contribute permissions or higher - even Full Control (!) - on a site collection cannot create pages. Site Collection Administrators can create pages fine. The only way I can get users to create pages is to give them all Contribute permissions to the Master Page Gallery. The Master Page Gallery, by default, does not inherit permissions from its parent site; its default out-of-the-box set-up is this:

Approvers - SharePoint Group - Read
Designers - SharePoint Group - Design
Hierarchy Managers - SharePoint Group - Read
Restricted Readers - SharePoint Group - Restricted Read
Style Resource Readers - SharePoint Group - Read

Usually having Contribute permissions or even full permissions to a site allows you to create a page without having to change the Master Page Gallery's permissions from the above default - we even have site collections which have these settings and work fine. I don't want to give people unnecessary access to the Master Page Gallery - is there anything else which could be causing this issue?
http://www.dotnetspark.com/links/9374-full-desired-height-richtextbox-control.aspx
CC-MAIN-2018-05
en
refinedweb
With the release of Telligent Community 10.0 just around the corner we wanted to share some details about one of the larger development efforts in this release: Assembly Restructuring. In Telligent Community 10.0 we have re-organized the platform code so that there is a clear separation of the platform (sometimes referred to as "core") code from the applications themselves. With this change the platform code and infrastructure do not depend on any specific application, and application changes will not have any impact on the platform. The applications within Community (ideation, forums, blogs, etc.) will interact with the platform the same way as any custom application or custom code would: through supported APIs.

With the separation of application and platform code, applications will own their own APIs. There is no longer a single API library to reference; instead you will reference the application and/or platform specific libraries, and only the ones you need. In 10.0, for any new development you should take advantage of this immediately. If you had previous customizations you will not be required to change your references until the next major release. This is possible because of a process known as Type Forwarding. This is a feature of .NET that allows the runtime to redirect requests for APIs in the old locations to their new ones without the need to change any code. If, however, at any time you need to recompile that custom code for any reason, you will be forced to update the references in order for the code to compile correctly, and to update your framework usage to .NET 4.6.2. There will be more detail on how to make these updates later in the article. Remember that all custom code will be required to update its references and framework version before the next major version.

Another change is how we recommend you interact with the in-process APIs. If you have worked with the platform for a while, odds are you have interacted with the global, static Telligent.Evolution.Extensibility.Api.Version1.PublicApi class. While this class was easy to find and easy to use, it was not extensible and it created unnecessary dependencies with all aspects of the product. In version 8.5, we introduced a better solution that allows APIs to be registered with the platform via a new plugin type: Telligent.Evolution.Extensibility.Api.IApiDefinition. In turn you access all registered APIs via Telligent.Evolution.Extensibility.Apis.Get<T>. This solution creates an abstraction layer for APIs that allows the management of an API to be a platform task, decoupling it from any specific application. This approach is still easy to use, provides the necessary separation of concerns, and opens up the platform for developers to add their own APIs using a consistent pattern. In 10.0, PublicApi still exists for backward compatibility, but it has been deprecated and we are recommending you update your code to use Apis.Get<T>. The PublicApi static class will be completely removed in the next major version.

If you choose to update your code before the next major release, it is a pretty straightforward process. You are not required to do this for 10.0 unless you choose to, or you have to recompile your code. You will need to make these changes in the next major version.

Q. I wrote a custom widget for my community in Widget Studio using Community APIs; will it continue to work?
A. Yes. All of the community APIs used through Widget Studio continue to work as they did.

Q.
Telligent currently hosts my community and the only changes I have made are some changes to existing widgets and to my theme; will my changes still work?
A. Yes. Functionality provided out of the box and manipulated through the UI or administration has not changed.

Q. I am a third-party application developer and I integrated with the community using REST APIs (or the Community REST SDK); will my REST calls still work?
A. Yes. REST API usage has not changed.

Q. If I wrote a custom plugin of any type, will I need to re-compile and/or change my code?
A. No. If you just want to use your plugin as it was in your old version (re-deploy the DLL) and it used only supported APIs, then you can simply redeploy the assembly from your plugin to 10.0. If you need to make changes or recompile at any time, however, you will need to at minimum update to .NET 4.6.2 and update the references as described above.

Q. With all the API moves, are supported APIs still in the Telligent.Evolution.Extensibility namespaces?
A. Yes. We did not change the namespaces when we moved APIs, and that namespace still remains the way to identify supported APIs from code. If you are using code not in this namespace or documented as an API, you should consider this a good time to change it over to supported APIs. Unsupported APIs can be removed or changed at any time without notice.

Q. Telligent told me they would not break supported APIs. Is Telligent still committed to upgrade safety and extensibility?
A. Yes, absolutely. This is the reason we introduced the type forwarding concept: to maintain backwards compatibility for existing code through version 10. This way you are not forced to make changes and can plan accordingly for the next major version. If an API was deprecated in an earlier version, removed in 10, and you are still using it, you will need to update it, which would include updating .NET and references.

Q. I have Telligent Community and Telligent Professional Services did a lot of work on my community; do I need to contact them before I upgrade to version 10?
A. No. If you are not planning on any changes that result in a recompile, you should be fine. It may not hurt, however, to at least ask; we are more than happy to answer any questions.

Q. I wanted to add enhancements to my customization code after we upgrade; will I need to update my references? What if I need to fix a bug?
A. Yes. Type forwarding is only a feature of the .NET runtime (CLR), which means it can only occur when the code is executing. It does not occur during compilation, so in order for your code to build successfully, you will need to update your references and .NET version at minimum.
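To make the reference update concrete, here is the same lookup written both ways, using the IGroups API that also comes up in the comments below:

// Before (deprecated static entry point, removed in the next major version):
var containerTypeId = PublicApi.Groups.ContainerTypeId;

// After (the registered API access recommended from 10.0 on):
var containerTypeId = Apis.Get<IGroups>().ContainerTypeId;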
Apis Get is run before initialization so it is available for plugin initialization, they are not going to be available however for events before hand, which includes before init events, plugin configuration, job schedules and translations to name a few. Oops submitted too soon. I can look at the specific scenario above and see, initial thought is *should* work but I need to look deeper. Give me until next week. Also I wanted to add we have added alot of information in terms of the lifecycle of the plugins and Apis in the new 10 training docs, they will be out around the time of the release. To answer the IOC question, no. Apis.Get is not a DI/IOC framework, it is simply a way to register Apis into a common location. A factory is probably the closest equivalent. You can use a DI/IOC framework to populate your Apis via the registration process(as we do) but its not in itself capable of this.We are considering opening up our framework with an extensibility wrapper. Its still way early to commit to that since it would be beyond 10 but it is an option we are looking at that satisfies the separation of concerns I talk about above. Is the documentation for the version 10 already available? Not quite yet, we will make it available around RTM. In the meantime 9.x documentation is valid for 10.
https://community.telligent.com/blog/b/developers/posts/telligent-community-10-0-developer-announcements
CC-MAIN-2018-05
en
refinedweb
We are given a list of numbers in increasing order, but there is a missing number in the list. We will write a program to find that missing number. For example, when the user enters the 5 numbers in this order: 1, 2, 3, 4, 6, then the missing number is 5. To understand this program, you should have the basic knowledge of for loops and functions.

Program

We ask the user to input the size of the array, which means the number of elements the user wishes to enter; after that the user is asked to enter those elements in increasing order, leaving one element out. The program finds the missing element. The logic we are using is: the sum of the first n integers is n(n+1)/2. Here we are missing one element, which means we should replace n with n+1, so the expected total in our case becomes (n+1)(n+2)/2. Once we have the total, we subtract all the elements that the user has entered from it; the remaining value is our missing number.

#include <iostream>
using namespace std;

int findMissingNo (int arr[], int len){
   int temp;
   temp = ((len+1)*(len+2))/2;
   for (int i = 0; i<len; i++)
      temp -= arr[i];
   return temp;
}

int main() {
   int n;
   cout<<"Enter the size of array: ";
   cin>>n;
   int arr[n];
   cout<<"Enter array elements: ";
   for(int i=0; i<n; i++){
      cin>>arr[i];
   }
   int missingNo = findMissingNo(arr, n);
   cout<<"Missing Number is: "<<missingNo;
   return 0;
}

Output:

Enter the size of array: 5
Enter array elements: 1 2 3 5 6
Missing Number is: 4
https://beginnersbook.com/2017/09/cpp-program-to-find-the-missing-number/
CC-MAIN-2018-05
en
refinedweb
FSRM WMI Classes The following WMI Classes are part of the File Server Resource Manager (FSRM) API: In this section - MSFT_FSRMAction Represents a FSRM action that can be triggered in response to a quota, file management job, or file screen event. - MSFT_FSRMAdr Provides interfaces to determine why access was denied to a file and to send a request for access according to policies defined on the file server. - MSFT_FSRMADRSettings Returns an object containing event settings as supplied by the server. - MSFT_FSRMAutoQuota Creates a new AutoQuota on the server with the provided configuration and returns the FSRM quota object. - MSFT_FSRMClassification Represents a classification task. - MSFT_FSRMClassificationPropertyDefinition Represents a classification property definition. - MSFT_FSRMClassificationPropertyValue Represents a classification property value. - MSFT_FSRMClassificationRule Defines a classification rule. - MSFT_FSRMEffectiveNamespace A class that represents the namespaces in scope on the server. - MSFT_FSRMFileGroup Used to define a group of files based on one or more file name patterns. - MSFT_FSRMFileManagementJob Defines a file management job. - MSFT_FSRMFileScreen Used to configure a file screen that blocks groups of files from being saved to the specified directory. - MSFT_FSRMFileScreenException Used to configure an exception that excludes the specified files from the file screening process. - MSFT_FSRMFileScreenTemplate Used to configure templates from which new file screens can be derived. - MSFT_FSRMFMJAction Represents a file management job action object. - MSFT_FSRMFMJCondition Represents the file management job conditions. - MSFT_FSRMFMJNotification Represents a file management job notification. - MSFT_FSRMFMJNotificationAction Represents a new file management job notification action. - MSFT_FSRMMacro Represents a FSRM macro object. - MSFT_FSRMMgmtProperty Represents a FSRM management property which is a classification property that includes "Folders" (2) in its MSFT_FSRMClassificationPropertyDefinition.AppliesTo property and whose MSFT_FSRMClassificationPropertyDefinition.Flags property does not include the "Secure" (4) value. - MSFT_FSRMMgmtPropertyValue Represents a FSRM management property name/value pair. - MSFT_FSRMQuota Used to define a quota for a specified directory and to retrieve use statistics. - MSFT_FSRMQuotaTemplate Used to configure templates from which new quota objects can be derived. - MSFT_FSRMQuotaThreshold Represents a quota threshold and the actions that will be taken when the threshold is reached. - MSFT_FSRMScheduledTask Represents a FSRM scheduled task object - MSFT_FSRMSettings Represents the FSRM settings. - MSFT_FSRMStorageReport Represents a storage report.
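As a quick illustration of how classes like these are typically consumed, here is a sketch in PowerShell. The CIM namespace shown (root/microsoft/windows/fsrm) is the usual location of the MSFT_FSRM* classes, but treat it and the property names as assumptions to verify on your system:

# Enumerate the quotas defined on this file server
Get-CimInstance -Namespace root/microsoft/windows/fsrm -ClassName MSFT_FSRMQuota |
    Select-Object Path, Size, Usage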
https://msdn.microsoft.com/en-us/library/windows/desktop/hh706679.aspx
CC-MAIN-2018-05
en
refinedweb
Closures, Contexts and Stateful Functions

Scheme uses closures to write function factories, functions with state, and software objects. newLISP uses variable expansion and namespaces called contexts to do the same. newLISP namespaces are always open to inspection. They are first-class objects which can be copied and passed as parameters to built-in newLISP primitives or user-defined lambda functions. A newLISP context can contain several functions at the same time. This is used to build software modules in newLISP. Like a Scheme closure, a newLISP context is a lexically closed space. In newLISP, inside that namespace, scoping is dynamic. newLISP allows mixing lexical and dynamic scoping in a flexible way.

Function factories

The first example is a simple function factory. The function makes an adder function for a specific number to add. While Scheme uses a function closure to capture the number in a static variable, newLISP uses an expand function to create a specific lambda function containing the number as a constant:

; Scheme closure
(define make-adder
    (lambda (n)
        (lambda (x) (+ x n))))

(define add3 (make-adder 3)) => #<procedure add3>
(add3 10) => 13

newLISP uses either expand or letex to make n part of the lambda expression as a constant, or it uses curry:

; newLISP using expand
(define (make-adder n)
    (expand (lambda (x) (+ x n)) 'n))

(define add3 (make-adder 3))
(add3 10) => 13

; newLISP using letex
(define (make-adder n)
    (letex (c n) (lambda (x) (+ x c))))

; or letex on same symbol
(define (make-adder n)
    (letex (n n) (lambda (x) (+ x n))))

(define add3 (make-adder 3))
(add3 10) => 13

; newLISP using curry
(define add3 (curry + 3))
(add3 10) => 13

In either case we create a lambda expression with the 3 contained as a constant.

Functions with memory

The next example uses a closure to write a generator function. It produces a different result each time it is called and remembers an internal state:

; Scheme generator
(define gen
    (let ((acc 0))
        (lambda () (set! acc (+ acc 1)))))

(gen) => 1
(gen) => 2

In newLISP we create local state variables using a namespace context:

; newLISP generator
(define (gen:gen)
    (setq gen:sum
        (if gen:sum (inc gen:sum) 1)))

; this could be written even shorter, because
; 'inc' treats nil as zero
(define (gen:gen) (inc gen:sum))

(gen) => 1
(gen) => 2

When writing gen:gen, a context called gen is created. gen is a lexical namespace containing its own symbols, used as variables and functions. In this case the namespace gen has the symbols gen and sum. The first symbol gen has the same name as the parent context gen. This type of symbol is called a default functor in newLISP. When using a context name in place of a function name, newLISP assumes the default functor. We can call our generator function using (gen); it is not necessary to call it as (gen:gen), since (gen) will default to (gen:gen). Watch the movie here. As an exercise, create a function def-static to automate the process of defining lexically scoped functions.

Introspection

In newLISP the inner state of a function can always be queried. In Scheme the state of a closure is hidden and not open to introspection without extra code:

; in Scheme states are hidden
add3 => #<procedure add3>
gen  => #<procedure gen>

; in newLISP states are visible
add3    => (lambda (x) (+ x 3))
gen:sum => 2
gen:gen => (lambda () (inc gen:sum))

In Scheme, lambda closures are hidden from inspection once they are evaluated and assigned.
Functions in newLISP are first-class lists

(define (double x) (+ x x))
(setf (nth 1 double) '(mul 2 x))
double => (lambda (x) (mul 2 x))

The first-class nature of lambda expressions in newLISP makes it possible to write self-modifying code.

Stateful functions using in-place modification

;; sum accumulator
(define (sum (x 0)) (inc 0 x))

(sum 1) ;=> 1
(sum 2) ;=> 3
sum     ;=> (lambda ((x 0)) (inc 3 x))

;; self incrementer
(define (incre) (inc 0))

(incre) ;=> 1
(incre) ;=> 2
(incre) ;=> 3
incre   ;=> (lambda () (inc 3))

;; make a stream function with an expansion closure
(define (make-stream lst)
    (letex (stream lst)
        (lambda () (pop 'stream))))

(set 'lst '(a b c d e f g h))
(define mystream (make-stream lst))

(mystream) ;=> a
(mystream) ;=> b
(mystream) ;=> c

(set 'str "abcdefgh")
(define mystream (make-stream str))

(mystream) ;=> "a"
(mystream) ;=> "b"

Another interesting self-modifying pattern, the crawler-tractor, was shown by Kazimir Majorinc. It runs forever without using iteration or recursion: new code to be executed is copied from old code and appended to the end of the function, while old, already-executed code is popped off from the beginning of the function.

(define (f)
    (begin
        (println (inc cnt))
        (push (last f) f -1)
        (if (> (length f) 3) (pop f 1))))

The ability to write self-modifying functions is unique to newLISP.
http://www.newlisp.org/index.cgi?Closures
CC-MAIN-2018-05
en
refinedweb
SpeechToText-WebSockets-Javascript

Prerequisites

Subscribe to the Speech Recognition API, and get a free trial subscription key. The Speech API is part of Cognitive Services. You can get free trial subscription keys from the Cognitive Services subscription page. After you select the Speech API, select Get API Key to get the key. It returns a primary and a secondary key. Both keys are tied to the same quota, so you can use either key.

Note: Before you can use the Speech client libraries, you must have a subscription key.

In this section we will walk you through the necessary steps to load a sample HTML page. The sample is located in our github repository. You can open the sample directly from the repository, or open the sample from a local copy of the repository.

Note: Some browsers block microphone access on insecure origins, so it is recommended to host the sample (or your app) on HTTPS to get it working on all supported browsers.

Open the sample directly

Acquire a subscription key as described above. Then open the link to the sample. This will load the page into your default browser (rendered using htmlPreview).

Open the sample from a local copy

To try the sample locally, clone this repository with git clone, then compile the TypeScript sources and bundle/browserify them into a single JavaScript file (npm needs to be installed on your machine). Change into the root of the cloned repository and run:

cd SpeechToText-WebSockets-Javascript && npm run bundle

Open samples\browser\Sample.html in your favorite browser.

Next steps

Installation of npm package

An npm package of the Microsoft Speech Javascript Websocket SDK is available. To install the npm package, run:

npm install microsoft-speech-browser-sdk

As a Node module

If you're building a node app and want to use the Speech SDK, all you need to do is add the following import statement:

import * as SDK from 'microsoft-speech-browser-sdk';

and set up the recognizer:

function RecognizerSetup(SDK, recognitionMode, language, format, subscriptionKey) {
    let recognizerConfig = new SDK.RecognizerConfig(
        new SDK.SpeechConfig(
            new SDK.Context(
                new SDK.OS(navigator.userAgent, "Browser", null),
                new SDK.Device("SpeechSample", "SpeechSample", "1.0.00000"))),
        recognitionMode, // SDK.RecognitionMode.Interactive (Options - Interactive/Conversation/Dictation)
        language,        // Supported languages are specific to each recognition mode. Refer to docs.
        format);         // SDK.SpeechResultFormat.Simple (Options - Simple/Detailed)

    // Alternatively use SDK.CognitiveTokenAuthentication(fetchCallback, fetchOnExpiryCallback) for token auth
    let authentication = new SDK.CognitiveSubscriptionKeyAuthentication(subscriptionKey);

    return SDK.Recognizer.Create(recognizerConfig, authentication);
}

function RecognizerStart(SDK, recognizer) {
    recognizer.Recognize((event) => {
        /* Alternative syntax for typescript devs.
           if (event instanceof SDK.RecognitionTriggeredEvent) */
        switch (event.Name) {
            case "RecognitionTriggeredEvent":
                UpdateStatus("Initializing");
                break;
            case "ListeningStartedEvent":
                UpdateStatus("Listening");
                break;
            case "RecognitionStartedEvent":
                UpdateStatus("Listening_Recognizing");
                break;
            case "SpeechStartDetectedEvent":
                UpdateStatus("Listening_DetectedSpeech_Recognizing");
                console.log(JSON.stringify(event.Result)); // check console for other information in result
                break;
            case "SpeechHypothesisEvent":
                UpdateRecognizedHypothesis(event.Result.Text);
                console.log(JSON.stringify(event.Result)); // check console for other information in result
                break;
            case "SpeechFragmentEvent":
                UpdateRecognizedHypothesis(event.Result.Text);
                console.log(JSON.stringify(event.Result)); // check console for other information in result
                break;
            case "SpeechEndDetectedEvent":
                OnSpeechEndDetected();
                UpdateStatus("Processing_Adding_Final_Touches");
                console.log(JSON.stringify(event.Result)); // check console for other information in result
                break;
            case "SpeechSimplePhraseEvent":
                UpdateRecognizedPhrase(JSON.stringify(event.Result, null, 3));
                break;
            case "SpeechDetailedPhraseEvent":
                UpdateRecognizedPhrase(JSON.stringify(event.Result, null, 3));
                break;
            case "RecognitionEndedEvent":
                OnComplete();
                UpdateStatus("Idle");
                console.log(JSON.stringify(event)); // Debug information
                break;
        }
    })
    .On(() => {
        // The request succeeded. Nothing to do here.
    },
    (error) => {
        console.error(error);
    });
}

function RecognizerStop(SDK, recognizer) {
    // recognizer.AudioSource.Detach(audioNodeId) can also be used here.
    // (audioNodeId is part of ListeningStartedEvent)
    recognizer.AudioSource.TurnOff();
}

In a Browser, using Webpack

Currently, the TypeScript code in this SDK is compiled using the default module system (CommonJS), which means that the compilation produces a number of distinct JS source files. To make the SDK usable in a browser, it first needs to be "browserified" (all the JavaScript sources need to be glued together). To that end, this is what you need to do:

Add a require statement to your web app source file, for instance (take a look at sample_app.js):

var SDK = require('<path_to_speech_SDK>/Speech.Browser.Sdk.js');

Set up the recognizer, same as above.

Run your web app through webpack (see the "bundle" task in gulpfile.js; to execute it, run npm run bundle).

Add the generated bundle to your HTML page:

<script src="../../distrib/speech.sdk.bundle.js"></script>

In a Browser, as a native ES6 module

...in progress, will be available soon

Token-based authentication

To use token-based authentication, please launch a local node server, as described here.

Docs

The SDK is a reference implementation for the speech websocket protocol. Check the API reference and Websocket protocol reference for more details.

Browser support

The SDK depends on WebRTC APIs to get access to the microphone and read the audio stream. Most of today's browsers (Edge/Chrome/Firefox) support this. For more details about supported browsers, refer to navigator.getUserMedia#BrowserCompatibility.

Note: The SDK currently depends on the navigator.getUserMedia API. However, this API is in the process of being dropped as browsers move towards the newer MediaDevices.getUserMedia instead. The SDK will add support for the newer API soon.

Contributing

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
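Putting it together

For reference, here is a minimal sketch tying together the RecognizerSetup/RecognizerStart helpers defined earlier. The subscription key is a placeholder, and the UI callbacks (UpdateStatus and friends) are assumed to be defined elsewhere on the page:

var recognizer = RecognizerSetup(
    SDK,
    SDK.RecognitionMode.Interactive,
    "en-US",
    SDK.SpeechResultFormat.Simple,
    "YOUR_SUBSCRIPTION_KEY"); // placeholder - use your own key

RecognizerStart(SDK, recognizer);

// ... later, e.g. from a Stop button:
// RecognizerStop(SDK, recognizer);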
https://azure.microsoft.com/pl-pl/resources/samples/speechtotext-websockets-javascript/
CC-MAIN-2018-05
en
refinedweb
/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*-
 *
 * Portions created by the Initial Developer are Copyright (C) 2006
 * the Initial Developer. All Rights Reserved.
 *
 * Contributor(s):
 *   Robert O'Callahan <robert@ocallahan.org>
 */

#ifndef NSLINEBREAKER_H_
#define NSLINEBREAKER_H_

#include "nsString.h"
#include "nsTArray.h"
#include "nsILineBreaker.h"

class nsIAtom;

/**
 * A receiver of line break data.
 */
class nsILineBreakSink {
public:
  /**
   * Sets the break data for a substring of the associated text chunk.
   * One or more of these calls will be performed; the union of all substrings
   * will cover the entire text chunk. Substrings may overlap (i.e., we may
   * set the break-before state of a character more than once).
   * @param aBreakBefore the break-before states for the characters in the substring.
   */
  virtual void SetBreaks(PRUint32 aStart, PRUint32 aLength,
                         PRPackedBool* aBreakBefore) = 0;

  /**
   * Indicates which characters should be capitalized. Only called if
   * BREAK_NEED_CAPITALIZATION was requested.
   */
  virtual void SetCapitalization(PRUint32 aStart, PRUint32 aLength,
                                 PRPackedBool* aCapitalize) = 0;
};

/**
 * A line-breaking state machine. You feed text into it via AppendText calls
 * and it computes the possible line breaks. Because break decisions can
 * require a lot of context, the breaks for a piece of text are sometimes not
 * known until later text has been seen (or all text ends). So breaks are
 * returned via a call to SetBreaks on the nsILineBreakSink object passed
 * with each text chunk, which might happen during the corresponding AppendText
 * call, or might happen during a later AppendText call or even a Reset()
 * call.
 *
 * The linebreak results MUST NOT depend on how the text is broken up
 * into AppendText calls.
 *
 * The current strategy is that we break the overall text into
 * whitespace-delimited "words".
 */
class nsLineBreaker {
public:
  nsLineBreaker();
  ~nsLineBreaker();

  static inline PRBool IsSpace(PRUnichar u) { return NS_IsSpace(u); }

  static inline PRBool IsComplexASCIIChar(PRUnichar u)
  {
    return !((0x0030 <= u && u <= 0x0039) ||
             (0x0041 <= u && u <= 0x005A) ||
             (0x0061 <= u && u <= 0x007A));
  }

  static inline PRBool IsComplexChar(PRUnichar u)
  {
    return IsComplexASCIIChar(u) ||
           NS_NeedsPlatformNativeHandling(u) ||
           (0x1100 <= u && u <= 0x11ff) || // Hangul Jamo
           (0x2000 <= u && u <= 0x21ff) || // Punctuations and Symbols
           (0x2e80 <= u && u <= 0xd7ff) || // several CJK blocks
           (0xf900 <= u && u <= 0xfaff) || // CJK Compatibility Idographs
           (0xff00 <= u && u <= 0xffef);   // Halfwidth and Fullwidth Forms
  }

  // Break opportunities exist at the end of each run of breakable whitespace
  // (see IsSpace above). Break opportunities can also exist between pairs of
  // non-whitespace characters, as determined by nsILineBreaker. We pass a
  // whitespace-delimited word to nsILineBreaker if it contains at least one
  // character matching IsComplexChar.
  // We provide flags to control on a per-chunk basis where breaks are allowed.
  // At any character boundary, exactly one text chunk governs whether a
  // break is allowed at that boundary.
  //
  // We operate on text after whitespace processing has been applied, so
  // other characters (e.g. tabs and newlines) may have been converted to
  // spaces.

  /**
   * Feed Unicode text into the linebreaker for analysis. aLength must be
   * nonzero.
   * @param aSink can be null if the breaks are not actually needed (we may
   * still be setting up state for later breaks)
   */
  nsresult AppendText(nsIAtom* aLangGroup, const PRUnichar* aText,
                      PRUint32 aLength, PRUint32 aFlags,
                      nsILineBreakSink* aSink);

  /**
   * Feed 8-bit text into the linebreaker for analysis. aLength must be nonzero.
   * @param aSink can be null if the breaks are not actually needed (we may
   * still be setting up state for later breaks)
   */
  nsresult AppendText(nsIAtom* aLangGroup, const PRUint8* aText,
                      PRUint32 aLength, PRUint32 aFlags,
                      nsILineBreakSink* aSink);

  /**
   * Reset all state. This means the current run has ended; any outstanding
   * calls through nsILineBreakSink are made, and all outstanding references to
   * nsILineBreakSink objects are dropped.
   * After this call, this linebreaker can be reused.
   * This must be called at least once between any call to AppendText() and
   * destroying the object.
   * @param aTrailingBreak this is set to true when there is a break opportunity
   * at the end of the text. This will normally only be declared true when there
   * is breakable whitespace at the end.
   */
  nsresult Reset(PRBool* aTrailingBreak);

private:
  // This is a list of text sources that make up the "current word" (i.e.,
  // run of text which does not contain any whitespace). All the mLengths
  // are nonzero; these cannot overlap.
  struct TextItem {
    TextItem(nsILineBreakSink* aSink, PRUint32 aSinkOffset, PRUint32 aLength,
             PRUint32 aFlags)
      : mSink(aSink), mSinkOffset(aSinkOffset), mLength(aLength), mFlags(aFlags) {}

    nsILineBreakSink* mSink;
    PRUint32          mSinkOffset;
    PRUint32          mLength;
    PRUint32          mFlags;
  };

  // State for the nonwhitespace "word" that started in previous text and
  // hasn't finished yet.
  // When the current word ends, this computes the linebreak opportunities
  // *inside* the word (excluding either end) and sets them through the
  // appropriate sink(s). Then we clear the current word state.
  nsresult FlushCurrentWord();

  nsAutoTArray<PRUnichar,100> mCurrentWord;
  // All the items that contribute to mCurrentWord
  nsAutoTArray<TextItem,2>    mTextItems;
  PRPackedBool                mCurrentWordContainsComplexChar;

  // True if the previous character was breakable whitespace
  PRPackedBool                mAfterBreakableSpace;
  // True if a break must be allowed at the current position because
  // a run of breakable whitespace ends here
  PRPackedBool                mBreakHere;
};

#endif /*NSLINEBREAKER_H_*/
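For context, a hypothetical caller of this API might look like the sketch below. It is not part of the original header: it assumes a valid nsIAtom* language group, a UTF-16 text buffer, passes 0 for the per-chunk flags, and uses printf (so <stdio.h>) purely for illustration.

#include <stdio.h> // for printf in this sketch

// Minimal sink that just reports break opportunities.
class DebugBreakSink : public nsILineBreakSink {
public:
  virtual void SetBreaks(PRUint32 aStart, PRUint32 aLength,
                         PRPackedBool* aBreakBefore) {
    for (PRUint32 i = 0; i < aLength; ++i) {
      if (aBreakBefore[i])
        printf("break opportunity before offset %u\n", aStart + i);
    }
  }
  virtual void SetCapitalization(PRUint32, PRUint32, PRPackedBool*) {
    // Not requested in this sketch (no BREAK_NEED_CAPITALIZATION flag).
  }
};

void AnalyzeText(nsIAtom* aLangGroup, const PRUnichar* aText, PRUint32 aLength)
{
  DebugBreakSink sink;
  nsLineBreaker breaker;
  breaker.AppendText(aLangGroup, aText, aLength, 0, &sink);
  PRBool trailingBreak;
  breaker.Reset(&trailingBreak); // flushes the final word to the sink
}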
http://xulrunner.sourcearchive.com/documentation/1.9.1.9-3/nsLineBreaker_8h-source.html
CC-MAIN-2018-05
en
refinedweb
In this document

- Defining Styles
- Applying Styles and Themes to the UI

See also

- Style and Theme Resources
- R.style for Android styles and themes
- R.attr for all style attributes

A style is a collection of attributes that specify the look and format for a View or window. A style can specify attributes such as height, padding, font color, font size, background color, and much more. A style is defined in an XML resource that is separate from the XML that specifies the layout. For example, by using a style, you can take this layout XML:

<TextView
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:textColor="#00FF00"
    android:typeface="monospace"
    android:text="@string/hello" />

And turn it into this:

<TextView
    android:textAppearance="@style/CodeFont"
    android:text="@string/hello" />

The attributes related to style have been removed from the layout XML and put into a style definition called CodeFont, which is then applied using the android:textAppearance attribute. The definition for this style is covered in the following section.

Styles in Android share a similar philosophy to cascading stylesheets in web design—they allow you to separate the design from the content.

A theme is a style applied to an entire Activity or app, rather than an individual View, as in the example above. When a style is applied as a theme, every view in the activity or app applies each style attribute that it supports. For example, if you apply the same CodeFont style as a theme for an activity, then all text inside that activity appears in a green monospace font.

Defining Styles

To create a set of styles, save an XML file in the res/values/ directory of your project. The name of the XML file must use the .xml extension, and like other resources, it must use lowercase letters and underscores, and be saved in the res/values/ folder. The root node of the XML file must be <resources>.

For each style you want to create, complete the following series of steps:

- Add a <style> element to the file, with a name that uniquely identifies the style.
- For each attribute of that style, add an <item> element, with a name that declares the style attribute. The order of these elements doesn't matter.
- Add an appropriate value to each <item> element.

Depending on the attribute, you can use values with the following resource types in an <item> element:

- Fraction
- Float
- Boolean
- Color
- String
- Dimension
- Integer

You can also use values with a number of special types in an <item> element. The following special types are unique to style attributes:

- Flags that allow you to perform bitwise operations on a value.
- Enumerators consisting of a set of integers.
- References, which are used to point to another resource.

For example, you can specify the particular value for an android:textColor attribute—in this case a hexadecimal color—or you can specify a reference to a color resource so that you can manage it centrally along with other colors.

The following example illustrates using hexadecimal color values in a number of attributes:

<resources>
    <style name="AppTheme" parent="Theme.Material">
        <item name="colorPrimary">#673AB7</item>
        <item name="colorPrimaryDark">#512DA8</item>
        <item name="colorAccent">#FF4081</item>
    </style>
</resources>

And the following example illustrates specifying values for the same attributes using references:

<resources>
    <style name="AppTheme" parent="Theme.Material">
        <item name="colorPrimary">@color/primary</item>
        <item name="colorPrimaryDark">@color/primary_dark</item>
        <item name="colorAccent">@color/accent</item>
    </style>
</resources>

You can find information on which resource types can be used with which attributes in the attribute reference, R.attr.
For more information on centrally managing resources, see Providing Resources. For more information on working with color resources, see More Resource Types.

Here's another example file with a single style:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="CodeFont" parent="@android:style/TextAppearance.Medium">
        <item name="android:textColor">#00FF00</item>
        <item name="android:typeface">monospace</item>
    </style>
</resources>

This example style can be referenced from an XML layout as @style/CodeFont (as demonstrated in the introduction above).

The parent attribute in the <style> element is optional and specifies the resource ID of another style from which this style should inherit attributes. You can then override the inherited style attributes.

A style that you want to use as an activity or app theme is defined in XML exactly the same as a style for a view. How to apply a style as an app theme is discussed below, in "Apply a theme to an activity or app".

Inheritance

The parent attribute in the <style> element lets you specify a style from which your style should inherit attributes. You can use this to inherit attributes from an existing style and define only the attributes that you want to change or add. You can inherit from styles that you've created yourself or from styles that are built into the platform. For example, you can inherit the Android platform's default text appearance and modify it:

<style name="GreenText" parent="@android:style/TextAppearance">
    <item name="android:textColor">#00FF00</item>
</style>

For information about inheriting from styles defined by the Android platform, see Using Platform Styles and Themes.

If you want to inherit from styles that you've defined yourself, you don't have to use the parent attribute. Instead, you can use dot notation by prefixing the name of the style you want to inherit to the name of your new style, separated by a period. For example, to create a new style that inherits the CodeFont style defined above, but makes the color red, you can create the new style like this:

<style name="CodeFont.Red">
    <item name="android:textColor">#FF0000</item>
</style>

Notice that there is no parent attribute in the <style> tag, but because the name begins with the CodeFont style name, this new style inherits all style attributes from the CodeFont style. The new style then overrides the android:textColor attribute to make the text red. You can reference this new style as @style/CodeFont.Red.

You can continue inheriting styles like this as many times as you'd like by chaining names with periods. For example, you can extend CodeFont.Red to be bigger, with:

<style name="CodeFont.Red.Big">
    <item name="android:textSize">30sp</item>
</style>

This style inherits from the CodeFont.Red style, which itself inherits from the CodeFont style, then adds the android:textSize attribute.

Note: This technique for inheritance by chaining together names only works for styles defined by your own resources. You can't inherit Android built-in styles this way, as they reside in a different namespace to that used by your resources. To reference a built-in style, such as TextAppearance, you must use the parent attribute.

Style Attributes

Now that you understand how a style is defined, you need to learn what kind of style attributes—defined by the <item> element—are available. The attributes that apply to a specific View are listed in the corresponding class reference, under XML attributes. For example, instead of supplying the android:inputType attribute directly on an EditText element in your layout:

<EditText
    android:inputType="number"
    ... />

You can instead create a style for the EditText element that includes this property:

<style name="Numbers">
    <item name="android:inputType">number</item>
    ...
</style>

So your XML for the layout can now implement this style:

<EditText
    style="@style/Numbers"
    ... />

For a reference of all style attributes available in the Android Framework, see the R.attr reference. For the list of all style attributes available in a particular package of the support library, see the corresponding R.attr reference. For example, for the list of style attributes available in the support-v7 package, see the R.attr reference for that package.

Keep in mind that not all view objects accept all the same style attributes, so you should normally refer to the specific subclass of View for a list of the supported style attributes. However, if you apply a style to a view that doesn't support all of the style attributes, the view applies only those attributes that are supported and ignores the others.

As the number of available style attributes is large, you might find it useful to relate the attributes to some broad categories. The following list includes some of the most common categories:

- Default widget styles, such as android:editTextStyle
- Color values, such as android:textColorPrimary
- Text appearance styles, such as android:textAppearanceSmall
- Drawables, such as android:selectableItemBackground

You can use some style attributes to set the theme applied to a component that is based on the current theme. For example, you can use the android:datePickerDialogTheme attribute to set the theme for dialogs spawned from your current theme. To discover more of this kind of style attribute, look at the R.attr reference for attributes that end with Theme.

Some style attributes, however, are not supported by any view element and can only be applied as a theme. These style attributes apply to the entire window and not to any type of view. For example, style attributes for a theme can hide the app title, hide the status bar, or change the window's background. These kinds of style attributes don't belong to any view object. To discover these theme-only style attributes, look at the R.attr reference for attributes that begin with window. For instance, windowNoTitle and windowBackground are style attributes that are effective only when the style is applied as a theme to an activity or app. See the next section for information about applying a style as a theme.

Note: Don't forget to prefix the property names in each <item> element with the android: namespace. For example: <item name="android:inputType">.

You can also create custom style attributes for your app. Custom attributes, however, belong to a different namespace. For more information about creating custom attributes, see Creating a Custom View Class.

Applying Styles and Themes to the UI

There are several ways to set a style:

- To an individual view, by adding the style attribute to a View element in the XML for your layout.
- To an individual view, by passing the style resource identifier to a View constructor (a sketch follows below). This is available for apps that target Android 5.0 (API level 21) or higher.
- Or, to an entire activity or app, by adding the android:theme attribute to the <activity> or <application> element in the Android manifest.

When you apply a style to a single View in the layout, the attributes defined by the style are applied only to that View.
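The programmatic option mentioned above can be sketched as follows. This is illustrative rather than taken from the original page; it assumes API level 21+ and uses the four-argument View constructor, whose last parameter is a default style resource:

// Apply R.style.CodeFont to a TextView created in code (API 21+).
TextView textView = new TextView(
        context,           // a valid Context, e.g. the Activity
        null,              // no AttributeSet; the view is not inflated from XML
        0,                 // no default style attribute from the theme
        R.style.CodeFont); // the style resource defined earlier
textView.setText("Hello, styled world!");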
If a style is applied to a ViewGroup, the child View elements don't inherit the style attributes; only the element to which you directly apply the style applies its attributes. However, you can apply a style so that it applies to all View elements—by applying the style as a theme.

To apply a style definition as a theme, you must apply the style to an Activity or app in the Android manifest. When you do so, every View within the activity or app applies each attribute that it supports. For example, if you apply the CodeFont style from the previous examples to an activity, then the style is applied to all view elements that support the text style attributes that the style defines. Any view that doesn't support the attributes ignores them. If a view supports only some of the attributes, then it applies only those attributes.

Apply a style to a view

Here's how to set a style for a view in the XML layout:

<TextView
    style="@style/CodeFont"
    android:text="@string/hello" />

Now this TextView uses the style named CodeFont. (See the sample above, in Defining Styles.)

Note: The style attribute doesn't use the android: namespace prefix.

Every framework and Support Library widget has a default style that is applied to it. Many widgets also have alternative styles available that you can apply using the style attribute. For example, by default, an instance of ProgressBar is styled using Widget.ProgressBar. The following alternative styles can be applied to ProgressBar:

- Widget.ProgressBar.Horizontal
- Widget.ProgressBar.Inverse
- Widget.ProgressBar.Large
- Widget.ProgressBar.Large.Inverse
- Widget.ProgressBar.Small
- Widget.ProgressBar.Small.Inverse

To apply the Widget.ProgressBar.Small style to a progress bar, supply the name of the style in the style attribute, as in the following example:

<ProgressBar
    style="@android:style/Widget.ProgressBar.Small"
    ... />

To discover all of the alternative widget styles available, look at the R.style reference for constants that begin with Widget. To discover all of the alternative widget styles available for a support library package, look at that package's R.style reference for fields that begin with Widget. For example, to view the widget styles available in the support-v7 package, see the R.style reference for that package. Remember to replace all underscores with periods when copying style names from the reference.

Apply a theme to an activity or app

To set a theme for all the activities of your app, open the AndroidManifest.xml file and edit the <application> tag to include the android:theme attribute with the style name. For example:

<application
    android:theme="@style/CustomTheme"
    ... >

If you want a theme applied to just one activity in your app, add the android:theme attribute to the <activity> tag instead.

If you like a theme but want to adjust it, add the theme as the parent of your own custom theme. For example, you can modify the Theme.Material.Light theme to use your own background color like this:

<style name="CustomTheme" parent="android:Theme.Material.Light">
    <item name="android:windowBackground">@color/custom_theme_color</item>
    <item name="android:colorBackground">@color/custom_theme_color</item>
</style>

Note: The color needs to be supplied as a separate resource here because the android:windowBackground attribute only supports a reference to another resource; unlike android:colorBackground, it can't be given a color literal.

Now use CustomTheme instead of Theme.Light inside the Android manifest:

<activity
    android:theme="@style/CustomTheme"
    ... >

To discover the themes available in the Android Framework, search the R.style reference for constants that begin with Theme_.

You can further adjust the look and format of your app by using a theme overlay. Theme overlays allow you to override some of the style attributes applied to a subset of the components styled by a theme. For example, you might want to apply a darker theme to a toolbar in an activity that uses a lighter theme.
If you are using Theme.Material.Light as the theme for an activity, you can apply ThemeOverlay.Material.Dark to the toolbar using the android:theme attribute to modify the appearance as follows:

- The toolbar colors change to a dark theme, but other style attributes, such as those relating to size, are preserved.
- The theme overlay applies to any children inflated under the toolbar.

You can find a list of theme overlays in the Android Framework by searching the R.style reference for constants that begin with ThemeOverlay_.

Maintaining theme compatibility

To maintain theme compatibility with previous versions of Android, use one of the themes available in the appcompat-v7 library. You can apply a theme to your app or activity, or you can set it as the parent when creating your own backwards-compatible themes. You can also use backwards-compatible theme overlays in the Support Library. To find a list of the available themes and theme overlays, search the R.style reference in the android.support.v7.appcompat package for fields that begin with Theme_ and ThemeOverlay_ respectively.

Newer versions of Android have additional themes available to apps, and in some cases you might want to use these themes while still being compatible with older versions. You can accomplish this through a custom theme that uses resource selection to switch between different parent themes based on the platform version. For example, here is the declaration for a custom theme. It would go in an XML file under res/values (typically res/values/styles.xml):

<style name="LightThemeSelector" parent="android:Theme.Light">
    ...
</style>

To have this theme use the material theme when the app is running on Android 5.0 (API level 21) or higher, you can place an alternative declaration for the theme in an XML file in res/values-v21, but make the parent theme the material theme:

<style name="LightThemeSelector" parent="android:Theme.Material.Light">
    ...
</style>

Now use this theme like you would any other, and your app automatically switches to the material theme if it's running on Android 5.0 or higher.

A list of the standard attributes that you can use in themes can be found at R.styleable.Theme. For more information about providing alternative resources, such as themes and layouts that are based on the platform version or other device configurations, see the Providing Resources document.
https://developer.android.com/guide/topics/ui/look-and-feel/themes.html?hl=ja
CC-MAIN-2018-05
en
refinedweb
Kylotan (Moderator)

- Content count: 14138
- Days Won: 47
- Community Reputation: 10165 (Excellent)

About Kylotan
- Rank: Moderator
- Scripting Languages and Game Mods

Personal Information
- Interests: Audio, Design, Programming

Social
- Steam: kylotan

Are character artists higher skilled than environment artists?
Kylotan replied to JustASpaceFiller's topic in 2D and 3D Art

No. There are different skills involved, but it's simply false to suggest that one type of artist is 'higher skilled' than another based purely on their title.

AI Duplication Issues
Kylotan replied to vex1202's topic in Artificial Intelligence

It's not clear what all your public variables are, so it's hard to be sure. If 'enemy' was the same character as the one doing the AI, it's easy to see how the last line of navigation would actually do nothing, for example. Use Debug.Log lines to see which code branches are being executed, and which values the code has at that time.

C# Help with my first C# Hello World!
Kylotan replied to TexasJack's topic in General and Gameplay Programming

Sometimes it can be counterproductive to learn C# separately from Unity, because the way you structure a program in Unity is very idiomatic. So bear that in mind.

"I am not sure what the differences between 'running' and 'building' code are" - Building code prepares it to be run. Running code actually executes the program. Often a button to 'Run' will check to see if it needs building, will build it if necessary, and then run it. If you are seeing your program's output, it's running.

"There are several bits of jargon that keep being thrown around, without much in the way of explanation: 'Classes', 'The Main Method', 'Namespace'." - These are big, complex concepts and you're not going to get an explanation from tool-tips or whatever. This is a whole programming language with books dedicated to it. If you don't want to get a book, your best bet is to get used to using the online references.

In the meantime:

class - a grouping of data with the functions that operate on that data. Objects in your program are usually instances of a class. Each class might have zero, one, or many instances.

main method - the function that gets called at the very start of your program, and that then calls other functions to get everything done.

namespace - a way to group classes together under a related name, for ease of use and to avoid naming clashes.

line-by-line explanation - why did you write it if there wasn't an explanation for it? Try finding a tutorial that actually explains what it is telling you to do. Don't get into the habit of typing code from the internet unless someone is explaining what it does.

"using System" - this gives you access to a bunch of pre-defined objects in the 'System' namespace. In this case, the only one you're using is 'Console'.

"namespace helloworldtwo" - this puts everything inside the next pair of braces into the 'helloworldtwo' namespace. In this code, this actually does nothing special, but if code in other files wanted to access the class or function in here, it would need to prefix it with "helloworldtwo", or use a "using" statement like you did with 'System'.

"class Mainclass" - this denotes the start of a class called 'Mainclass'. The name is arbitrary. A class just groups data and functions (functions are usually called 'methods' when in a class), but in this case there is no data, so it's just holding the function.

"public static void Main (string[] args)" - this starts to define a function.
The function is public, meaning any part of your program can access it; static, meaning you don't need a specific instance of that class to use the function; it returns 'void', which is a fancy way of saying 'nothing'; and it accepts an array of strings as an argument, calling them 'args'. Note however that your function currently ignores those strings.

A function is a block of code which takes some arbitrary input, does some stuff with it, and then returns some arbitrary output. In mathematics, an example of a very simple function is "square root": if you pass 25 into a square-root function, you get 5 out. And so on. In programming you can also have statements that cause side-effects - like Console.WriteLine - so your functions can do more than just return values, and this makes it practical to build entire programs out of functions.

"Console.WriteLine ("Hello World!")" - you already figured this bit out.

The point of this sort of system is that a state update takes some length of time (say, 'M') to reach the client from the server, and then some length of time (say, 'N') before the client fully reflects that change, to help ensure the client has received another state update from the server before the current one is reached. You can adjust N slightly on a per-message basis to account for fluctuations in M. In other words, if a message arrives sooner than expected, you might want to take longer to interpolate towards it, and vice versa. This keeps movement smooth.

It is reasonable to consider decreasing N during play so that it's not too long, imposing unnecessary client-side latency. This will be determined by M and your send rate. It's also reasonable to consider increasing N during play so that it's not too short, leaving entities stuttering around, paused in the time gaps between reaching their previous snapshot position and receiving the next snapshot from the server. This is determined by the variance of M (jitter) and your send rate.

Often N is left at a fixed value (perhaps set in the configuration), picked by developers as a tradeoff between the amount of network jitter they expect to contend with and the degree of responsiveness the players need to have. And if you're happy occasionally extrapolating instead of just interpolating - i.e. you are happy to accept less accuracy for less latency - then you can reduce N down even further. (A rough sketch of this buffering scheme is shown below.)

The idea to "catch up earlier" doesn't make sense in isolation. Earlier than what? The idea is that you always allow yourself a certain amount of time to interpolate smoothly towards the next snapshot, because you still want to be interpolating when the subsequent one comes in. You don't want to be decreasing that delay over time, because of the stuttering problem above, unless you have sufficient information to be able to do so.
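To illustrate the buffered interpolation described in that reply, here is a rough C# sketch. The names are illustrative rather than from any specific engine; Snapshot.time is the server timestamp on each update, and InterpDelay plays the role of 'N':

struct Snapshot { public float time; public Vector3 position; }

const float InterpDelay = 0.1f; // "N": render 100 ms behind the server

List<Snapshot> snapshots = new List<Snapshot>(); // ordered by server time

void UpdateEntity(float serverNow)
{
    float renderTime = serverNow - InterpDelay;

    // Drop snapshots older than the pair straddling renderTime.
    while (snapshots.Count >= 2 && snapshots[1].time <= renderTime)
        snapshots.RemoveAt(0);

    if (snapshots.Count >= 2)
    {
        Snapshot a = snapshots[0], b = snapshots[1];
        float t = (renderTime - a.time) / (b.time - a.time);
        position = Vector3.Lerp(a.position, b.position, t);
    }
    // else: no future snapshot yet - hold the last position (or extrapolate)
}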
On more than one project I've worked on, I've been assigned tasks where that consisted of a single sentence like "Implement dialogue windows" or "Fix NPC animation" with no indication of what that truly means. The implication is that you can work it out for yourself, if you dig into it and speak to enough people. If you're really lucky there's something written about it somewhere (most likely an email thread you weren't included on, 3 months ago.) Then what inevitably happens is do you do the work, commit the work, and someone says "no, I didn't mean like that. I wanted it more like...<whatever>" Repeat until shipping day. As for tedious detail work... sorry, there's plenty of that. Type just 1 letter wrong in your code? It won't build. Or will crash. Set one value wrong on a material? It'll look wrong, and you'll have to dig through to find and fix it. Game development is complex, detailed business. You're not going to find this in the games industry, sorry. You don't get to be the ideas person without spending years in the "finalize everything" trenches. Why would we trust someone to come up with ideas if they have no appreciation of how they would be implemented? Onto your questions (though I feel the answers are now irrelevant): In your daily work life, do you feel like you are being stimulated with new task frequently, or do you mainly work on similar tasks? Depends entirely on your viewpoint. Some people in other industries are amazed that I have to work on the same project, inside the same codebase, for years on end. Every day I am "writing code", so that's the same task. Sometimes it's a more interesting feature, sometimes it is not. Do you feel like you have the possibility to innovate and bring in new ideas in your job / task? Yes, but that's because I'm a senior member of a small team. As a junior member getting into the industry your expectations have to be much lower. Do you feel that, for the most part, your position has clearly described activities, or are you mainly taking on various roles that are currently needed? Again, this is a matter of perspective. My job is 98% programming; so that is 'clearly described'. Is what I have to deliver clearly described? No, not at all. Do you get a lot of feedback on performance or progress? Or is your work mainly done 'when its done'? Both. Most developers are expected to take responsibility for monitoring their own progress and delivering tasks on time, but there will be feedback on what is delivered, at some stage. - No, because that's not necessary, nor does it make sense. If the object is moving, then your local interpolated data will continually be following those positions, N milliseconds in arrears, where N is the length of the buffer you chose. If the object is not moving, then your local version will catch up to the final position within N milliseconds. There is no need to change the speed of anything. - If you were seeing discontinuities in the position, then you weren't treating the currently interpolated position as the new start position, or you weren't resetting the interpolated time at that point. If you were seeing the object continually stop and start, then your updates were arriving too late to ensure that there was always a future position queued up. 
One good way to debug this sort of issue is to slow everything down - drop the network send rate to something like once per second, set your update rate at something like 20 or 30 frames per second, scale up all other time values accordingly, and do a lot of logging to see when things change in ways you don't expect. Get it working correctly at the slower speeds, where it's easier to debug any issues. Then ramp up the speed to what you actually need. Also, ensure you understand how to interpolate properly. If your interpolation looks like Lerp(start, end, 0.5) (or any other constant) then that is not truly interpolating linearly between the values over time. If this isn't enough information to help you, then you will probably have to show your snapshot handling and interpolation code. Unreal Memory Allocation Kylotan replied to SillyCow's topic in Engines and MiddlewareI don't think you should expect to get that sort of low level access to the engine's memory management. I suspect that you will either need to use a different JPEG decoder, or find a different way to achieve your aims here (e.g. decompress the images ahead of time, use a video codec, etc) - It's not clear why you didn't try the simplest solution to the original problem: when a new future position comes in from the server, you lerp directly towards that from wherever the client representation is now. In other words, the currently lerped position becomes the 'now, T=0' position and the received position becomes the 'later, T=N' position, where N is some number of milliseconds. The reasoning for using a time in the future is because of the following: if you instantly snap to whatever position the server reports then it's obviously going to look very jerky - so you want to smooth that over some period of time. (Let's call that a 'time buffer' for the purposes of this post.) You interpolate between a start position and a next position. that time buffer effectively adds more latency, because your client has that extra delay before it moves the entity where it needs to be. So you want the buffer to be as short as possible to reduce this. On the other hand, you always want to have a 'next position' queued up in the future or the entity will stop moving while it waits for the next snapshot to come in. (If you think about it, the 'instant snap' situation can be considered a special case where the interpolation is over 0 milliseconds.) So, you choose a time buffer length that is short enough to not add too much extra latency, but long enough that you always have a next position from the server queued up, even in the face of varying message ping/latency (known as 'jitter') Regarding Pong - I think this method would be fine. You have very few entities to update and therefore very low bandwidth requirements, so I'd just suggest sending messages as frequently as possible. - The pros and cons are exactly what you'd think they are. e.g. If you have a web interface, the pros are that you get to use your own browser, the cons are that you need to do everything via HTTP and HTML. If you have a console interface, the pros are that it's very simple text in, text out. The cons are that it's just text in, text out. You need to think about what you want from the tool - again, depending entirely on what type of server you're talking about, where you're hosting it, what kind of people need to query it, and decide on the interface you need in order to meet those requirements. - Yes, you do typically need some way to query the server. 
What you need depends on what type of server you're talking about, where you're hosting it, what kind of people need to query it, etc. You could create a specialised client, you could have a console style login (e.g. with SSH), you might have a web interface, etc. Accounting for lost packets? Kylotan replied to Substance12's topic in Networking and MultiplayerJust to add to what was said above: The 100ms server-side delay seems like the wrong thing to do. Just apply the data when it arrives. If you ever receive a message that is out of order - i.e. you have already handled message 10, but now message 9 arrives - just drop it. A client-side delay however is a useful tool to reduce the effects of varying transmission speeds (aka "jitter"). The idea is usually to treat any received data as applying to some future time, so that you will always have 1 or 2 future states to blend smoothly towards. Note that a fixed number of milliseconds after receipt is probably less optimal than a varying time after receipt, where the variance takes transmission time into account. If each message is stamped with the server's sending time, you can get an idea of which messages are arriving 'early' and which are arriving 'late', and also get a feel for how big the delay needs to be in order to cover this variation. If you're sending snapshots, and sending them less often than you collect them, bear in mind there's no point sending the earlier ones - their information is superceded by the newer data. 50 milliseconds doesn't mean '30 tickrate' unless someone changed the duration of a second recently. Unreal Learning Unreal C++ development Kylotan replied to SillyCow's topic in Engines and MiddlewareWhen working with UE4, you practically have to use Blueprints. They're not an alternative to code, they're the backbone of the system. Even if you went the hard route and made all your logic in code, you would still almost certainly be interfacing with the Blueprint system for object instantiation, not to mention all the other parts of the visual scripting system. UE4 is not a code framework with some free graphical tools thrown in - it's a full development environment with hooks for you to add code, with C++ almost considered a scripting language. So unfortunately my answer to you is to get used to Blueprints. You can still work from examples but instead of copy and pasting you will need to re-create example Blueprints. This is actually very quick to do given that each node can be created in about 10 seconds and joining 2 nodes together takes under a second. Once you're more comfortable editing Blueprints, you could then practise extending the system - perhaps create a new component and add it to an actor, giving it some UPROPERTY values so you can set them in the editor. Then create a Blueprint-exposed function, and then call that function from within a Blueprint, e.g. from an Event. Maybe give the component a Tick and have it perform some game-loop processing, altering other components. Etc. Once a programmer is very familiar with the Unreal object model, when events are called, and how each part of the system interacts with other parts, it's possible to start converting a lot of the logic away from Blueprints and into code - but nobody is expected to start with a 'code only' approach. i need learn more about the Game Loop Kylotan replied to cambalinho's topic in For BeginnersJust to take this back to a more fundamental level... 
At a very basic level, for pretty much any task, computers work like this: Collect input -> Process data based on input -> Display output Lots of tasks require - or at least benefit from - repeating this process so that new input can be processed, and perhaps so the user can view the output and provide different input based on it. Collect input -> Process data based on input -> Display output -> Repeat from start Real-time systems like computer games and simulations work this way, with the additional constraint that they have some sort of hard or soft 'deadline'. In a computer game, the deadlines are typically 'soft' (in that the program doesn't break entirely if they are missed) but they are quite short, e.g. 33ms for a 30fps game or 16ms for a 60fps game. So the loop is executed with this deadline in mind. Note that on personal computers it's impractical to guarantee that each loop iteration takes a precise amount of time, so you normally aim for an arbitrary deadline but be prepared to measure the actual time taken and process with that in mind instead. Collect input for next 16ms -> Process data based on input to cover the next 16ms -> Display output for the next 16ms-> Repeat from start (16ms can be swapped for any other small value, constant or variable) Each of these iterations is generally called a 'frame' because you get one iteration for every one frame rendered to the screen (generally). So, the way you would make an enemy move, in this basic system, is to recognise that a moved enemy is a type of data processing (i.e. the position data changes), and that the movement covers a certain time span (how far can an enemy move in 16ms? Or however long your frame took?) Each time though the loop, you move the enemy that tiny amount, then display the new position on the screen, and repeat.
https://www.gamedev.net/profile/2996-kylotan/?tab=classifieds
CC-MAIN-2018-05
en
refinedweb
New in version 2.1.8 (October 14th, 2014)

New Features:
- Added the new "hola" API - hamsterdb analytical functions for COUNT, SUM, AVERAGE etc. See ham/hamsterdb_ola.h for the declarations.
- Added new API ham_cursor_get_duplicate_position.
- A new Python API was added.

Bugfixes:
- issue #33: upgraded to libuv 0.11.22.
- Fixed a performance regression in 2.1.7 - large fixed-length keys created too many page splits, even if they were stored as extended keys.

Other Changes:
- The database format no longer tries to be endian agnostic; the database is now stored in host endian format. The endian-agnostic code was broken anyway, and I had no hardware to test it.
- ham_db_get_error is now deprecated.
- Header files no longer include winsock.h, to avoid conflicts with winsock2.h on Windows platforms.
- Both btree layouts have been completely rewritten; PAX KeyLists can now be used in combination with duplicate RecordLists, and variable-length KeyLists can now be used in combination with PAX RecordLists.
- Btree splits are now avoided if keys are appended (HAM_HINT_APPEND).
- The internal communication with the remote server now uses a different protocol which is faster than Google's protocol buffers.
- The PAX layout now uses linear search for small ranges; this improves search performance by 5-10%.
- Removed the ham_get_license API (and serial.h).
http://linux.softpedia.com/progChangelog/hamsterdb-Changelog-17717.html
CC-MAIN-2015-18
en
refinedweb
"Sean Hunter" <sean@uncarved.co.uk> writes:> > > > And they do that in order to get a unique number, not a random number.> > > > > > This is also a really bad idea, because with easily guessable pids you> > > are opening yourself to /tmp races.> > > > There can be only one process with a given pid at a time, so there can't> > be anything I'd call a race.> > You run your program, but I have created a simlink in /tmp with the> same name (because the name is guessable). That is a race because it> relies on contention between two processes (my "ln -s" and your broken> program) over a shared resource (the easily-guessable name in the> shared namespace of the filesystem). This is the definition of a> race. You may not call it that, but everyone else would.You're mixing two things together. A process can consider its pid to beunique in the sense that any other process that looks at *its* pid will see adifferent number. This is true and there's never a race involved. I believethis is what AC is saying.If the process' code assumes that certain objects with names based on its pid(e.g., temp files) will or won't exist (or whatever), then that's a badassumption, but it's not in itself a race either.If the process' code tries to determine or enforce the status of a file with acertain name in a non-atomic way (using stat or unlink) and then to "quickly"open a file with that name, *that* is a race. (Whether or not the namecontains the pid is irrelevant.)--Mike-- Any sufficiently adverse technology is indistinguishable from Microsoft.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.rutgers.eduPlease read the FAQ at
http://lkml.org/lkml/2000/1/13/126
CC-MAIN-2015-18
en
refinedweb
Squeezing More Guice from Your Tests with EasyMock

One other thing was pointed out by several reviewers of my first article, and I am in total agreement: instead of TaxRateManager taking a Customer ID, it makes more sense for it to take a Customer instance and return the tax rate based on that instance. This makes the API a little cleaner. For this, you need to update the TaxRateManager interface, and the fake object implementation of it:

public interface TaxRateManager {
    public void setTaxRateForCustomer(Customer customer, BigDecimal taxRate);
    public void setTaxRateForCustomer(Customer customer, double taxRate);
    public BigDecimal getTaxRateForCustomer(Customer customer);
}

And the new implementation:

@Singleton
public class FakeTaxRateManager implements TaxRateManager {
    ....

    public void setTaxRateForCustomer(Customer customer, BigDecimal taxRate) {
        taxRates.put(customer.getCustomerId(), taxRate);
    }

    public void setTaxRateForCustomer(Customer customer, double taxRate) {
        this.setTaxRateForCustomer(customer, new BigDecimal(taxRate));
    }

    public BigDecimal getTaxRateForCustomer(Customer customer) {
        BigDecimal taxRate = taxRates.get(customer.getCustomerId());
        if (taxRate == null) taxRate = new BigDecimal(0);
        return taxRate;
    }
}

It's pretty minor stuff, but it creates a cleaner API for using the TaxRateManager. The Invoice getTotalWithTax method now has to pass in the Customer rather than the ID:

public BigDecimal getTotalWithTax() {
    BigDecimal total = this.getTotal();
    BigDecimal taxRate = this.taxRateManager.getTaxRateForCustomer(customer);
    BigDecimal multiplier = taxRate.divide(new BigDecimal(100));
    return total.multiply(multiplier.add(new BigDecimal(1)));
}

AssistedInject

Mixing the factory pattern with your dependency injection has solved the style issues you introduced in the first article by using Guice, so you might be asking yourself why Guice doesn't offer something to do this for us, because this would seem to be a fairly common situation. Well, Guice actually might offer it soon. Jesse Wilson and Jerome Mourits, a pair of engineers at Google, have created a Guice add-on called AssistedInject which formalizes the use of the factory pattern described above and makes it more Guicy as well. You can download and use the extension now, and a description of it is available on the Guice Google group. It is also going to be submitted into the core Guice project for future inclusion.

Recap

So, that's pretty much it. You can download the source code in the form of a NetBeans project that has been adapted to use both Guice and the factory pattern. What you should take away from this is that Dependency Injection, although very useful, is only one tool in the toolbox. It can be mixed with other software development practices and design patterns where appropriate, and where it makes sense. Used correctly, it can make your implementation and architecture more beautiful. If that is not the case, you are probably mis-using Guice and you should look for another, cleaner way of achieving the same thing (like in this case—using Guice to inject into a factory class instead, and then using the factory to create instances with immutable properties).

Testing Both Sides of Your Class

Looking at what you have so far, are you doing a good job of testing your Invoice class? Probably not; one test under ideal conditions is not very exhaustive.
You get some points for adding a couple of different items to the Invoice and then asking for the total with tax—forcing it to sum up the cost of the items and apply the tax to that sum—but you are only testing one possible customer, so you should probably make another call with a different customer and resulting tax rate, and ensure that the total is correct for that as well.

What about customers that don't exist? Your implementation of the FakeTaxRateManager returns 0 for the tax rate in this case—in other words, it fails silently. In a production system, this is probably not what you want; throwing an exception is probably a better idea.

Okay, say you add a test for another customer that exists with a different tax rate, and check that the total for that is different from the first customer. Then, you add another test for a non-existent customer and expect an exception to be thrown. Are you well covered then? I would also like to make sure that these Invoice instances don't interfere with each other, so a test that adds items to different Invoices with different Customers, to make sure adding items to one invoice has no effect on the total of the other, seems like a good idea too.

All of this sounds very exhaustive—surely you have tested your little class well if you do all of this? At least, that is what you would think if you were only used to looking at one side of your class: the API calls (or, if you like, the input to your class).

There is another side, though: what this class calls—in particular, the calls it makes to the TaxRateManager. You can tell that the calls are probably pretty near correct because you are getting back the amounts you are expecting, but suppose Invoice is very inefficient and calls the TaxRateManager ten times instead of once to try and get the answer? In a system that needs to be highly performant, that is unacceptable overhead. You want to make sure that, for a given call to getTotalWithTax, only one lookup call is made, for the correct customer, to the TaxRateManager.
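This is exactly the kind of check a mock object library handles well. As a sketch of where the article is heading, the following test uses EasyMock's expect/replay/verify cycle to assert that exactly one lookup is made; the Invoice constructor shown here is hypothetical, standing in for however your factory wires the TaxRateManager in:

TaxRateManager mockManager = EasyMock.createMock(TaxRateManager.class);

// Expect exactly one lookup for this customer, returning a fixed rate.
EasyMock.expect(mockManager.getTaxRateForCustomer(customer))
        .andReturn(new BigDecimal("10"))
        .once();
EasyMock.replay(mockManager);

Invoice invoice = new Invoice(mockManager, customer); // hypothetical wiring
invoice.addItemToInvoice(someItem);                   // hypothetical item setup
invoice.getTotalWithTax();

// Fails if getTaxRateForCustomer was called more (or fewer) than once.
EasyMock.verify(mockManager);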
http://www.developer.com/design/article.php/10925_3688436_2/Squeezing-More-Guice-from-Your-Tests-with-EasyMock.htm
CC-MAIN-2015-18
en
refinedweb
#include "petscao.h" PetscErrorCode AOCreateMemoryScalableIS(IS isapp,IS ispetsc,AO *aoout)Collective on IS Notes: The index sets isapp and ispetsc must contain the all the integers 0 to napp-1 (where napp is the length of the index sets) with no duplicates; that is there cannot be any "holes". Comparing with AOCreateBasicIS(), this routine trades memory with message communication. Level:beginner Location:src/vec/is/ao/impls/memscalable/aomemscalable.c Index of all AO routines Table of Contents for all manual pages Index of all manual pages
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/AOCreateMemoryScalableIS.html
CC-MAIN-2015-18
en
refinedweb