text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Beginner Data Visualization & Exploration Using Pandas This tutorial will offer a beginner guide into how to get around with Pandas for data wrangling and visualization. Pandas is an open source data structures and data analysis tool for python programming. As we saw from this article Python is the most popular data science language to learn in 2018. The name Pandas is derived from the word Panel Data — an Econometrics from Multidimensional data. This tutorial will offer a beginner guide into how to get around with Pandas for data wrangling and visualization. Import Pandas We start by importing pandas and aliasing it as pd to give us a shorthand to use in our analysis. import pandas as pd Pandas allows you to import files in various formats. The most popular format is CSV. The first step is to assign the file you are going to load to a variable in order to be able to manipulate the data frame later in your analyses. A data frame is basically the representation of the rows and columns in your dataset For A csv file df = pd.read_csv(‘pathtoyourfile.csv’) For An Excel File df = pd.read_excel(‘pathtoyourfile.xlsx’, sheetname=’nameofyoursheet’) Reading an online HTML file Pandas can also read off HTML tables online using the following command df = pd.read_html(‘linktoonlinehtmlfile’) You might need to install the following packages for this to work pip install Beautifulsoup htmllib5 lxml To illustrate some of the things you can do with pandas we shall use tweetsfrom major incubators that I collected for the year 2017. To view the first five items we call the head command on the dataset. Similarly to view the last five elements in the dataset we use the tail function. It is usually important to check the data type of the columns as well as if there are null values. This can be achieved using the info command. From this we can be able to tell that our dataset has 24933 entries, 5 columns and they are all non null. This will not always be case. In the event that some of the rows are null we would have to deal with them appropriately depending on the situation at hand. One way is to drop them and the other way is to fill them. Let’s assume that we had a age column in our dataset representing the age of the person who sent out the tweet. We would fill it with the mean as follows df[‘age’].fillna(value=df[‘age’].mean()) We could also decide to drop them this way df.dropna() and this will drop all columns with null values. Dealing with null values is very important because it can affect the kind of insights that you draw from the data. You can also use this method to check for null values As we saw earlier this dataset has no null values. Group By We might want to group all our tweets by the username and count the number of tweets each organization had. We might also be interested in seeing the top 10 organizations with most tweets. We use Sort_values to sort the data frame by the number of tweets. Sum Since all organizations have retweets let’s find out which organization had the most retweets. We can achieve this by grouping the tweets by the username of the organization and summing the retweets.
https://www.kdnuggets.com/2018/10/beginner-data-visualization-exploration-using-pandas-beginner.html
CC-MAIN-2019-13
refinedweb
566
62.58
- Calculate Area and Circumference of a Circle This article covers a program in Java that find and prints area and circumference of a circle. Both area and circumference will get calculated based on the radius provided at run-time of the program. Note - The area of a circle is calculated using the formula 3.14*r*r. Where r is the radius value of circle. Note - The circumference of a circle is calculated using the formula 2*3.14*r. Where r is the radius value of circle. Note - In above formulae, 3.14 is the value of Pi or π Find Area of Circle in Java The question is, write a Java program to calculate area of circle. The answer to this question is the program given below: import java.util.Scanner; public class CodesCracker { public static void main(String[] args) { float r, area; Scanner s = new Scanner(System.in); System.out.print("Enter the Radius of Circle: "); r = s.nextFloat(); area = (float)(3.14*r*r); System.out.println("\nArea = " +area); } } The snapshot given below shows the sample run of above Java program with user input 5 as radius of that circle, whose area we want to find and print using the program: In the following statement, of above program: area = (float)(3.14*r*r); the float() is used to convert whatever the value comes out from 3.14*r*r, into floating-point, to avoid Incompatible Error, i.e., possible loosy conversion from double to float. That is, because the type of area variable is declared as float, therefore we need to convert the value comes from 3.14*r*r into float Find Circumference of Circle in Java The question is, write a Java program to calculate circumference of a circle. The program given below is its answer: import java.util.Scanner; public class CodesCracker { public static void main(String[] args) { float r, circum; Scanner s = new Scanner(System.in); System.out.print("Enter the Radius of Circle: "); r = s.nextFloat(); circum = (float)(2*3.14*r); System.out.println("\nCircumference = " +circum); } } Here is its sample run with same user input as of previous program. That is, 5 as radius of circle: Calculate Area and Circumference of Circle in Java This is basically the combined version of previous two programs. That is, I've combined both the program, so that in a single program, we can see the area and circumference both at one execution of the program. import java.util.Scanner; public class CodesCracker { public static void main(String[] args) { float r, area, circum; Scanner s = new Scanner(System.in); System.out.print("Enter the Radius of Circle: "); r = s.nextFloat(); area = (float)(3.14*r*r); circum = (float)(2*3.14*r); System.out.println("\nArea = " +area); System.out.println("Circumference = " +circum); } } The sample run of above program, with user input 4.2, is: Same Program in Other Languages - C Calculate Area Circumference - C++ Calculate Area Circumference - Python Calculate Area of Circle - Python Calculate Circumference of Circle « Previous Program Next Program »
https://codescracker.com/java/program/java-program-calculate-area-circumference.htm
CC-MAIN-2022-21
refinedweb
508
57.77
Control Components MVC in Brief ZK framework supports the MVC design pattern to develop a web application. This pattern separates an application into 3 parts: Model, View, and Controller. The Model is the data that an application handles. The View is UI which indicates a ZUL page in a ZK-based application. The Controller handles events from UI, controls the UI, and accesses the Model. For complete explanation, please refer to ZK Developer's Reference/MVC. Spreadsheet Properties Each component is represented by a unique tag name, e.g. Spreadsheet is <spreadsheet> in a ZUL page. The easiest way to control a component is to set a component's properties via a tag's attribute. Each property has its own effect, and you can change it by specifying values in a tag's attribute. Excel File Path The simplest way to load and display an Excel book file is setting Spreadsheet's src attribute to the file path which is a relative URI with respect to the web application root. <spreadsheet src="/TestFile2007.xlsx" .../> - In this case, TestFile2007.xlsx is under the web application's root folder. Some UI part's visibility is configurable like toolbar, formula bar, sheet bar, and context menu. Toolbar The showToolbar attribute controls toolbar's visibility, and it only accepts boolean literal. Default: false <spreadsheet showToolbar="true"/> Formula Bar The showToolbar attribute controls formula bar's visibility, and it only accepts boolean literal. Default: false <spreadsheet showFormulabar="true"/> Sheet Bar The showSheetbar attribute controls sheet bar's visibility, and it only accepts boolean literal. Default: false <spreadsheet showSheetbar="true"/> Context Menu The showToolbar attribute controls context menu's visibility, and it only accepts boolean literal. Default: false <spreadsheet showContextMenu="true"/> Selection Visiblility Since 3.8.1 Default: true When it's true, ZSS keeps the selection area visible after opening a dialog like "Format Cell". Specify false to remove this behavior. <spreadsheet keepCellSelection="false" .../> If you want to change the default value to false; you can do that by setting the library property org.zkoss.zss.ui.keepCellSelection to false[1] Preloaded Column / Row Size In order to speed up rendering cells, zk spreadsheet caches a range of cell data at client-side to avoid reach out of the cached range, ZSS takes more time to get the cached data from a server. Pasting Lots of Cells Requires Larger Preloaded Size If you paste lots of cell from Excel which is over cached cell range, ZSS will fail to handle the pasting because of lack of corresponding cached cell data at the client-side (you should see an error message in the developer tool / Console tab). In such case, you need to increase preloaded row/column size according to the expected row/column size for copying. Underlying Details There are 3 ranges behind the scene: - visible range: the viewport that shows a sheet. By default is 40 column * 50 rows. It changes if resize a browser window, or scrolling, etc. - rendered range: the range with rendered DOM of cells. Spreadsheet could also render hidden rows if the next visible row is within the range, e.g. if row 25th~30th are hidden, but row 31th is in the range, then Spreadsheet still renders row 25th~30th. - cached range: the browser cached cell data. When the visible range moves by user scrolling, ZSS renders DOM of cells from the cached. So the rendered range becomes larger. If the cache does not cover the whole visible range, ZSS the "visible range". Max Visible Rows and Columns The attribute maxVisibleColumns controls the maximum visible number of columns in Spreadsheet. The minimal value of the attribute must be larger than 0. For example, if you set it to 40, it will allow showing only column A to column AN. (Since 3.8.1) If you set this attribute to 0 or just do not set it, ZK Spreadsheet will detect the sheet content and show as more columns as needed. However, it will show at least 40 columns if you have a smaller sheet. Similarly, the attribute maxVisibleRows controls the maximum visible number of rows in Spreadsheet. You can use above 2 attributes to set up the visible area according to your requirement. (Since 3.8.1) If you set this attribute to 0 or just do not set it, ZK Spreadsheet will detect the sheet content and show as more rows as needed. However, it will show at least 200 rows if you have a smaller sheet. Usage: <spreadsheet maxVisibleRows="200" maxVisibleColumns="40"/> Other Inherited Properties There are other properties inherited from parent component you can set, such as width, or height. For the complete list, please look for those inherited setter methods in the javadoc Spreadsheet. Each setter means a corresponding attribute, for example: setWidth() <spreadsheet width="100%"> setHeight() <spreadsheet height="100%"> Controller After we create a ZUL page, we can apply a Controller to handle events and control components of the page. In ZK, the simplest way to create a Controller is to create a class that extends <window title="My First ZK Spreadsheet Application" apply="org.zkoss.zss.essential.MyComposer" border="normal" height="100%" width="100%"> <spreadsheet id="ss"src="/WEB-INF/books/startzss.xlsx" height="100%"width="100%" maxVisibleRows="150" maxVisibleColumns="40" showToolbar="true" showSheetbar="true" showFormulabar="true"/> </window> - on a ZUL Page After applying a controller, we can easily get a component object on the zul with the help of SelectorComposer and manipulate the component to fulfill our business requirement. Steps to get a component: - Declare a member variable with the same type as the component you want want to initialize components in a Controller, you should override doAfterCompose(). For complete explanation, please refer to ZK Developer's Reference/MVC/Controller/Wire Components. Let's see an example to get Spreadsheet component in index.zul: @Wire usage public class MyComposer extends SelectorComposer<Component> { @Wire Spreadsheet ss; @Override public void doAfterCompose(Component comp) throws Exception { super.doAfterCompose(comp); //wire variables and event listeners //access components after calling super.doAfterCompose() } } - Line 3,4: If you specify nothing in @Wire, ZK will use the variable name as a component's id to look for matching component in the ZUL page. In our case, ZK will try to find a Spreadsheet component whose id is ssin index.zul. - Line 7: Override this method to write initializing codes in it. - Line 8: Remember to call super.doAfterCompose()before you accessing components because parent class wires, there are setShowToolbar() and isShowToolbar() corresponding to attribute showToolbar. You can read Javadoc for complete list of getter and setter. Setter usage public class MyComposer extends SelectorComposer<Component> { @Wire Spreadsheet ss; @Override public void doAfterCompose(Component comp) throws Exception { super.doAfterCompose(comp); //wire variables and event listeners //access components after calling super.doAfterCompose() if (isConditionOne()){ ss.setShowToolbar(true); ss.setSrc("/books/firstFile.xlsx"); } } } - Line 11,12: Using API allows you to set a component dynamically upon different conditions. Handling Events In most scenarios, the controller is usually used to listen to interested events of Spreadsheet and implement business logic to react the events. When a user interacts with a Spreadsheet, it will send various events according to user actions. Please refer to Handling Events on how to listen events in a controller. To implement business logic, you definitely will need to access Spreadsheet data model. Refer to sections under Handling Data Model to know how to use it. All source code listed in this book is at Github.
https://www.zkoss.org/_w/index.php?title=ZK_Spreadsheet_Essentials/Working_with_Spreadsheet/Control_Components&oldid=53582
CC-MAIN-2022-33
refinedweb
1,245
55.03
Closed Bug 748193 Opened 10 years ago Closed 5 years ago Highlight Password Form Fields on http pages or with http submits Categories (Firefox :: Security, defect) Tracking () People (Reporter: tanvi, Unassigned) References (Blocks 2 open bugs) Details (Whiteboard: [fxprivacy]) Attachments (2 files, 1 obsolete file) Phase 1 of Outline the password field in red in the following 3 cases: * A user is asked to login on an http page. The login form submits to an http destination. Users password is sent in cleartext. * A user is asked to login on an https page. The login form submits to an http destination. Users password is sent in cleartext. * A user is asked to login on an http page. The login form submits to an https destination. An attacker can mitm the first request to the login page and replace the form with one that submits the password to the attackers webpage instead. Assignee: nobody → tanvi Would the description of each case appear when hovered via the title attribute? Otherwise I foresee many bugs filed about the red outline being buggy page rendering, especially in that third case. Justin - I am considering putting my code in nsLoginManager.js and wanted to see what you think about that. In some ways, the features are similar (deal with passwords, go through the page looking for forms, etc). And in some way they are different (my feature is not limited to 3 passwords on a page, can't be turned off with a preference, etc). I've started hacking this together, adding code to _webProgressListener's onStateChange that calls _hightlightCleartextPasswords(). This is a new internal method in LoginManager. Before I continue any further, I wanted to know if this is the right place. If it is, I can make some changes to the _getPasswordFields so that we only have to go through the html once when looking for password elements. Also, I'd like to see if we can prevent autofill of passwords on these insecure pages, and instead present the user with the multi-user experience (where they first select the username, and then the password is filled in). I will attach my pseudocode (that compiles, but doesn't really work yet). Thanks! Beginning of patch. Comment on attachment 617745 [details] [diff] [review] First hack Review of attachment 617745 [details] [diff] [review]: ----------------------------------------------------------------- I think we should include an about:config pref for this. It wouldn't be hard to, and it would be there for users who run in to issues with it. Can we reuse some of the code for HTML5 form validation? IIRC, that code puts an outline around elements that aren't passing validation. We'll probably also need a way for the browser to explain why there is an outline or what is happening. ::: toolkit/components/passwordmgr/nsLoginManager.js @@ +277,5 @@ > // STATE_START is too early, doc is still the old page. > if (!(aStateFlags & Ci.nsIWebProgressListener.STATE_TRANSFERRING)) > return; > + var pwClear = false; > + pwClear = _highlightCleartextPasswords(domDoc); nit: please use |let| instead of |var| @@ +1357,5 @@ > + var pwClear = false; > + > + if(documet.location.protocol != "https:") { > + http=true; > + } let isHttp = document.location.protocol != "https:"; @@ +1359,5 @@ > + if(documet.location.protocol != "https:") { > + http=true; > + } > + > + if(!forms || forms.length==0) { nit: space after if @@ +1362,5 @@ > + > + if(!forms || forms.length==0) { > + return; > + } > + else { } else { @@ +1363,5 @@ > + if(!forms || forms.length==0) { > + return; > + } > + else { > + for(var i=0; i<forms.length; i++) { for each (let form in forms) @@ ? @@ +1408,5 @@ > + * _getAllPasswordFields > + * > + */ > + _getAllPasswordFields : function(form) { > + for (var i = 0; i < form.elements.length; i++) { return form.elements.filter(function(element) { return (element instanceof Ci.nsIDOMHTMLInputElement && element.type == "password"); }); Attachment #617745 - Flags: feedback+ (In reply to Jared Wein [:jaws] from comment #4) > Comment on attachment 617745 [details] [diff] [review] > >. (In reply to Mardeg from comment #1) > Would the description of each case appear when hovered via the title > attribute? Otherwise I foresee many bugs filed about the red outline being > buggy page rendering, especially in that third case. Yes, I'm still working out what this will look like (placeholders, html5 custom constraint validation, etc). >. > We can also add an about:config pref to turn this off. I'm not sure I agree with having it off by default and then turning it on for a later version of firefox, but we'll see how it goes since we are still in the very early stages. > @@ ? I haven't worked out how this is going to be implemented yet. I was talking to David Keeler and he said we might need to use overlays like you do with click to play. I need to investigate more here. Thanks for the feedback Jared! A few general concerns: 1) I'm a little dubious about the feature in general... It's not something the user can really fix or change, and the end result is likely to be the browser scaring them over something they'll just have to end up ignoring. [Perhaps there's value in shipping it disabled by default, for the minority of users who understand what's going on.] 2) Password manager isn't quiiiite the right place for this, but I guess it's close enough. 3) Performance concerns. It's not really great to be running script that trudges through every page that loads (unavoidable for password manager, but it tries hard to avoid doing work it doesn't need to. If you code only ran for a small set of sites that's probably ok. Otherwise we should find something better. The good news on #3 is that we've been talking about ways that Gecko's content code could drive password manager, instead of how it works now. The nutshell idea is that whenever content creates a password field, pwmgr gets some notification of that (and otherwise does nothing). That limits the overhead of pwmgr to only pages that have password fields. That would likely work for what you're doing here too. [Perhaps, depending on desired UX, it could also be a CSS pseudoclass. That would be cheaper still.] That said, the perf impact is of more concern for an actual landing, and continuing to prototype/experiment in pwmgr is fine. I share @dolske's concerns re 1. But instead of making it default off, how about Mozilla ship with a list of sites that do have HTTPS support and redirect automatically for those sites? For the remaining sites, the feature can stay default off. This is similar to how Chrome ships with a list of HSTS sites preloaded (and Mozilla plans to do the same) (I meant, list of sites with login pages that support HTTPS) Version: 15 Branch → unspecified Assignee: tanvi → nobody Flags: firefox-backlog? Flags: firefox-backlog? → firefox-backlog- There is a similar proposal in bug Looks like these are from HTMLMediaElement. Perhaps another null check was missed? (In reply to Tanvi Vyas [:tanvi] from comment #11) > Looks like these are from HTMLMediaElement. Perhaps another null check was > missed? >- > 43aac2141105 Sorry :( Wrong bug. Comment on attachment 617745 [details] [diff] [review] First hack This bug is currently on hold since I'm unsure of what the right UI is for this and I'm unsure if we can actually ship this. Attachment #617745 - Attachment is obsolete: true This is just a proof of concept patch. Here is a screenshot of the UI it produces: There is still much more work to be done here: * Add about:config pref to turn this on/off * Make the "try secure version" button work * Add telemetry probes. * Fix up the text. Add links to Learn More pages * Write the Learn More pages. Potentially: * Integrate with existing door hanger so we don't get two doorhangers. For some cases (ex: fill) this makes sense. For others (ex: capture) it might not. * Do a preflight for the https version before offering the user an option to try the secure version Open Questions: * When does the doorhanger pop open? Likely on focus of the username or password field. * What happens when password is autofilled already? Does the doorhanger pop up automatically? * What should the mainAction be? "Got it" or "try secure version". If we have data suggesting that the secure version exists (ex: password manager record for https page), then "try secure version" could be the first option. We need to be careful to weigh the code complexity here with the benefit of switching around mainActions. Assignee: nobody → tanvi Status: NEW → ASSIGNED Comment on attachment 8595442 [details] [diff] [review] poc - insecurepass-04-21-15.patch >diff --git a/toolkit/locales/en-US/chrome/passwordmgr/passwordmgr.properties b/toolkit/locales/en-US/chrome/passwordmgr/passwordmgr.properties >--- a/toolkit/locales/en-US/chrome/passwordmgr/passwordmgr.properties >+++ b/toolkit/locales/en-US/chrome/passwordmgr/passwordmgr.properties >@@ -43,3 +43,8 @@ > removeAllPasswordsTitle=Remove all passwords > loginsSpielAll=Passwords for the following sites are stored on your computer: > loginsSpielFiltered=The following passwords match your search: >+insecurePasswordMsg=Passwords entered on HTTP pages are easily exposed to attackers. >+insecurePasswordMsgOKButtonText=Got it >+insecurePasswordMsgOKAccessKey=g >+trySecurePageButtonText=Try https version of the page >+trySecurePageButtonAccessKey=t." (In reply to Dave Garrett from comment #15) >." Yes; the text is only a placeholder right now. It needs lots of changes. Thank you for your suggestion! Tanvi - can Javaun's team take the UX piece of this - we want to give feedback on the password field itself since its more actionable than notifying before a user interacts with a password field. The same content would exist however we're not sure we can reliably offer a secure version if we don't know if one exists. Don't want to do duplicate work and with the new control center design, the password icon will be integrated into the control center most of the time. There is a saying, that a good worker never uses a spanner as a hammer. SSL is not the right tool for protecting passwords from misappropriation. Hashing is the correct approach. Like hitting a nail with a spanner it may prove partially effective, but encouraging the improper use of tools is almost always a bad idea. As a Web app coder, I see a problem here in that if browsers keep telling users that a site is insecure because the app has been installed on a non-SSL host, then that damages our reputation. Because, the site visitor will most likely not appreciate the reason for this situation and will badmouth the app itself, not the person who deployed it. A simple alternative might be to check for unscripted password submission -in other words a plain Submit button and no scripting in the form action URL- since that will almost always imply that the password is not being hashed. Of course, scripting of the submit action does not necessarily imply password hashing, but at least this check will trap the worst cases. Yes, TLS is the right tool to protect users when they enter a password into a form on some web page. Hashing the password before sending it to the server won't do anything good if the submission page is HTTP and can simply be MITM'ed by anyone between the user and the server. As soon as the user enters the password an attacker can simply inject a script that record key strokes or do something similar. That's why we not only have to ensure it is submitted *to* a secure site but also submitted *by* a secure page. MVP release version with new icon (pending final visual design) and control center panel feedback if password field exists on an http page. agrigas@mozilla.com: Minor bug in mvp v1.png: "You're login could be compromised" should be "Your ..." If you resolve a bug (bug 261294) marked as [ Importance: P1 enhancement || Rank: 4 ] as a duplicate of another (this), is it standard practice at Mozilla to carry that importance and rank over? Tim, I don't really understand the heavy emphasis on MITM attacks. How prevalent are they? Do you have any stats for that? The security reports I read show a depressingly repeating pattern of SQL code injections, cross-site scripting, buffer overflow exploits etc. These are all situations where failure to protect the password in disk or RAM storage is the key concern. TLS is of no help in such cases, because it automatically decrypts the password on arrival.? (In reply to Ian Macdonald from comment #24) >? They should at least host their login form on an https site that either doesn't have ads or uses SSL ads. Even if logging into that site doesn't really leak any personal information, many users use the same username and password for their email, banking, social networks, etc. So exposing credentials on one site over HTTP, exposes credentials for all sites. cross-site scripting, buffer overflows, etc are all security issues that need to be resolved. That doesn't mean we shouldn't protect against cleartext passwords flowing through the internet for anyone to come by and read. TLS is to ensure transport security and protect against this specific threat. "..many users use the same username and password for their email, banking, social networks, etc." Which is exactly the reason that a salted hash should be used. With only SSL, the password can be read in plaintext from the server, and freely used on other sites. It is extremely unlikely that a salted hash of a password will work on other sites, even if the same plaintext password was used. In any event the key question is, how prevalent are MITM attacks in relation to the other classes of vuln? There is plenty of verifiable evidence that SQL code injections are extremely prevalent. Do MITM attacks constitute a similar level of threat, or not? I think we need some sort of concrete figures on this one. Without such figures we may be simply chasing a straw man and ignoring the real issue. Related but not duplicate: bug 1185145 Firefox should warn if using HTTP basic auth without TLS Tracking in [fxprivacy] Flags: firefox-backlog- → qe-verify? Whiteboard: [fxprivacy] has landed to use degraded security UI when a password field is present on an HTTP page. has been filed to implement contextual feedback to password fields (as designed in /. In this bug, I was using the key doorhanger for password manager. If we are going to degrade the security UI and give contextual info in the password field dropdown, then we don't need to use the key icon at all. Hence, this might now be a duplicate of. But not marking it yet, want to make sure 1217162 is the way we are planning to go forward here. Assignee: tanvi → nobody Status: ASSIGNED → NEW I guess the message shown has a small issue as described in bug 1342414 This to me looks like it has been resolved by Bug 1217142 dependencies. Please reopen if this isn't the case. Status: NEW → RESOLVED Closed: 5 years ago Resolution: --- → FIXED
https://bugzilla.mozilla.org/show_bug.cgi?id=748193
CC-MAIN-2022-33
refinedweb
2,494
63.9
Hello, I am attempting to import a text file with 6,000,000 characters (5,000 cases and 228 variables) into SPSS. Unfortunately the data file was generated with white space (space between variables ranging from a single space to many spaces) as its delimiter method. SPSS reads the first space as the delimiter, and then reads all subsequent spaces as missing data, which throws off my import. Is there a way to either A) fix a text file delimited by white space to a single space delimiter method? OR B) tell SPSS to read with white space delimiters? OR C) tell SPSS that there are no missing variables in the import file, so that it is tricked into ignoring the whitespace? I would appreciate any help! Thank you. Answer by jkpeck (5855) | Apr 10, 2018 at 01:34 PM The easiest way would be to preprocess the file using the code below and then read with the text wizard (or GET DATA). This code replaces any string of blanks with a single blank. Change the input and output file specs as appropriate. Be sure to keep the r preceding the quoted literals and maintain the indentations. This site is bad at showing indentation, so the line starting with ---- should be indented a few spaces (and remove the ----) begin program. import re fin = open(r"c:/temp/spaces.txt") fout = open(r"c:/temp/lessspace.txt", "wb") for line in fin: ----fout.write(re.sub(" +", " ", line)) fout.close() end program. Would i run this code as syntax in SPSS? Sorry, I am kind of a novice when it comes to this stuff. Thank you so much for taking the time to respond by the way, this has been a pain for days! Answer by RonnocN (41) | Apr 10, 2018 at 02:47 PM Thank you so much! I cannot express how helpful you were! Thank you again, Connor 139 people are following this question. How do you use the SPSS Fixed File command to import data from a txt document? 0 Answers Error importing .csv into SPSS: (2272) Duplicate variable name (only first 64 characters are counted): 5 Answers SPSS, excel file "in use", except file not open? 2 Answers N (OF CASES) and ALTER TYPE 8 Answers NLR command issue 4 Answers
https://developer.ibm.com/answers/questions/441660/importing-txt-data-into-spss-when-data-are-white-s.html
CC-MAIN-2019-35
refinedweb
383
74.69
In the case of inputting three states from the user, the checkbox fails. We can use Dropdown but it doesn't look so good. So, I thought of creating a triple-state-checkbox, just like Win Forms. This control uses the CSS to show three states of the checkbox. Currently, it returns 255 (byte's max value) for not selected, 0 - not selected and 1 for selected. This control is an extension of composite control which uses the Image and a Label control to Download the attached ZIP file which has a project named "YControls". Attach this project to your existing project or directly use the DLL from the BIN/debug folder of YControls project. Registering the control to be available on all Forms. You can use any tagPrefix are per your stadards. <pages> <controls> <add namespace="YControls" assembly="YControls" tagPrefix="YControls" /> </controls> </pages> Style.CSS and Images folder is currently bundled for the sample application. These are required. These can be changed as per requirement and themes can be applied. The application is not something which is differently coded, but the idea of trip-state-checkbox is new to web based forms, and found very useful in the cases of Yes/No/Default or anything which has something to do with triple-state. This is the first version of triple-state-check
http://www.codeproject.com/Articles/17983/Triple-State-Checkbox-for-Web?PageFlow=FixedWidth
CC-MAIN-2013-20
refinedweb
223
65.93
CSmtpProxyMT is a C++ class that provides developers with a simple way to implement a multi-threaded SMTP proxy server in their applications. You can use it in MFC projects as well as in non-MFC C++ projects. I'd like to thank my good friend Colin Davies for his idea of inserting signatures into mails at the SMTP level instead of at the mail client level. CSmtpProxyMT _beginthreadex #include "SmtpProxyMT.h" CSmtpProxyMT *m_smtpproxy; m_smtpproxy = new CSmtpProxyMT; m_smtpproxy->SetSignature("Hello World from Nish"); m_smtpproxy->StartProxy("192.168.1.44",25,125); m_smtpproxy->StopProxy(); I'll list below the public member functions that you can use CSmtpProxyMT(); Remarks :- Constructs a CSmtpProxyMT object. Please remember that you should only create the CSmtpProxyMT object on the heap. int StartProxy(char *server, int port, int localport); Return Value :- This will return ERR_RUNNING if the proxy is already started. Else it will return OK_SUCCESS ERR_RUNNING OK_SUCCESS Parameters :- server - You specify the SMTP server here. You can use a domain name such as smtp.nish.com or you can use an IP address such as 202.54.6.60 server port - This is where you specify the SMTP port, usually 25 port localport - This is the SMTP proxy port. You can specify 25 here too, unless you are running the proxy on the same IP as the SMTP server. In that case use something else. localport Remarks :- This will start the SMTP proxy service. Currently if there is a host resolve problem you will not get notified. This is a TODO for me. int StopProxy(); Return Value :- This returns ERR_STOPPED if the proxy is not running. Otherwise it returns OK_SUCCESS ERR_STOPPED Remarks :- This stops the SMTP proxy service. BOOL SetSignature(char *sig); Return Value :- This will return true if the signature was successfully set. It will return false if the signature is too large. true false Parameters :- sig - This is used to specify the signature. It should be a null-terminated string. sig Remarks :- Currently there is a 2 KB limit which can be easily modified by editing the source. This version supports only text signatures. It's a TODO for me to add HTML signature options in the next version I am sure purist C++ programmers will frown on seeing my source code. I have not followed strictly proper coding standards nor have I tried to stick to perfect OOP concepts. There might be people who say, that this is not the way to write code. Well, that's the way I write code. Perhaps I need to change, but it's not going to be easy I have tested the class using Outlook Express 6.0. I have tested it with several combinations such as text only mails, html mails, mails with single/multiple attachments. So far I haven't had any problems. But I presume that there might be problems with some mail clients that use their own methods in sending mails. You'll also notice that I haven't commented the code much. But the code is self-explanatory. I don't usually comment my code and even when I do, the comments I put are obscure and often serve only to confuse my colleagues. Therefore I try to avoid comments. But there are comments where required. And if anyone requires any clarification, I'll be glad to oblige him or her. Thanks This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) BOOL CSmtpProxyMT::FoundBoundary(char *boundary, char *buff) { char *bnd_begin = strstr(buff, "boundary=\""); if(NULL == bnd_begin) { return FALSE; } bnd_begin += 10; char *bnd_end = strstr(p, "\"\r\n"); if (NULL == bnd_end) { return FALSE; } strncpy(boundary, bnd_begin, bnd_end - bnd_begin); boundary[bnd_end - bnd_begin] = '\0'; return TRUE; } this void func() { CSmtpProxyMT myProxy; _beginthreadex(<...>, &myProxy, <...>); } General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/1576/CSmtpProxyMT-1-0?fid=2913&df=90&mpp=10&noise=1&prof=True&sort=Position&view=None&spc=None&fr=11
CC-MAIN-2017-34
refinedweb
658
64.81
CS::Network::Socket::iSocket Struct Reference A socket connection obtained from iSocketManager. More... #include <inetwork/socket.h> Detailed Description A socket connection obtained from iSocketManager. Definition at line 70 of file socket.h. Member Function Documentation Accept a currently pending connection while listening from them on a bound address. This will block until a connection is ready to be accepted if there's non currently waiting. In order to check whether there's a connection pending in the queue use iSocketManager::Select and check whether this socket is ready to be read. If a pointer to a placeholder for an iAddress is passed it'll hold the address of the peer that connected on success. Returns a socket for the established connection on success, else nullptr is returned. Bind this socket to a local address. This is required before any communication can can occur for UDP sockets and is also required for TCP servers before any listening can occur. Connects to the peer specified by the passed iAddress. This is required for TCP sockets before any data can be transmitted. For UDP sockets connecting is optional and specifies that data may only be sent to and received from the specified peer. Returns true if a connection could be established and false if an error occured. Returns the address family this socket can work with. Obtain a textual description of the last error that occured. Note that due to platform restrictions this error may not have necessarily occured due to errors related to this socket. Returns the transport protocol this socket is using. Check whether this socket is associated with a peer. Returns true if it is, false otherwise. Check whether this socket is ready to transmit data. Returns true if it is, false otherwise. Start to listen for incoming connections on the the bound address. This is required before any connections can be accepted for TCP servers. The queue size specifies how many incoming connections that haven't been accepted, yet, may be waiting in a queue without being rejected. Receive data from the peer this socket is connected to if a connection was established. If no connection was established and this is a UDP socket data from any peer will be read. The read data will be written to the buffer passed and at most size bytes will be read. If a pointer to a placeholder for an iAddress was passed this placeholder will be filled with information about the peer from which the data was received. Returns the number of bytes actually received into the buffer or size_t(-1) if an error occured. Sends at most size bytes of data read from buffer to a peer. The peer the data is sent to is the one this socket is connected to if this socket is connect and otherwise the one specified by the passed address. It is an error to specify a peer if the socket is connected or is a TCP socket and to not pass a peer if the socket is not connected and a UDP socket. On success the actual numeber of bytes sent will be returned which will not exceed size. Otherwise size_t(-1) is returned to indicate an error. The documentation for this struct was generated from the following file: Generated for Crystal Space 2.1 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api/structCS_1_1Network_1_1Socket_1_1iSocket.html
CC-MAIN-2017-30
refinedweb
558
65.12
New firmware release v1.7.7.b1 (REQUIRES NEW UPDATER TOOL) Hello, A new firmware release is out. The version is 1.7.7.b1. Here's the change log: Before upgrading to this version make sure to get the latest updater tool firs. Read this port for more info: esp32: Correct the OTAA (over the air update) functionality esp32: Update to the latest IDF. Solve FreeRTOS context switch crash. esp32: Update Sigfox module to work with the new Sigfox library v1.93 esp32: Update Sigfox library to the latest version (v1.93). esp32: Fix support for BLE indications on the GATT Server. esp32: Properly close GATT server and client connections. esp32: Only install the RMT driver for measuring pulses once. In order to get the new firmware it, please user the updater tool that can be downloaded from here: Cheers, Daniel - jmarcelino Is the git repository updated? - Jurassic Pork hello @Jurassic-Pork (answer to myself :-) ) i think that i have found where is the problem. To work with the DHT , i need to put the used pin in mode OPEN_DRAIN (input / output) . I initialize my classe like this : th = DTH(Pin('P3', mode=Pin.OPEN_DRAIN),0) it seems that the function pulses_get change the mode of the pin used to INPUT. So the second time i try a read , it doesn't work. with this change in my code : self.__send_and_sleep(0, 0.019) data = pycom.pulses_get(self.__pin,100) self.__pin.init(Pin.OPEN_DRAIN) self.__pin(1) it is now ok in my loop :-) - Jurassic Pork hello, it seems that there is a problem with the function pulses_get with this code : # pull down to low until 19 ms self.__send_and_sleep(0, 0.019) data = pycom.pulses_get(self.__pin,100) self.__pin(1) print(data) time.sleep(5) i acquire data from a DHT sensor. the first time i run this code it is OK , i have all the pulses data in the data variable. But if i run this code in a loop the second time , i have no data acquired. Maybe i am wrong somewhere but if someone can check this. - this.wiederkehr @daniel, @robert-hh I can confirm as well, that the sd-card error is gone with this release. Thanks! @jmarcelino It looks as if this function is useful for the DHT sensors. There are solutions using pure python, but they sometimes fail. @daniel Actually, my needs are for counting pulses, not measuring their duration (although that would certainly be useful, I don't have a specific use-case for it right now). My thinking on the pulse counting is that I'll have to wait until the ESP32's ULP is supported for it to be practical to do (i.e. pulse counting using interrupts or other methods require the module to be awake too much--at least once per pulse--whereas the ULP has the potential of being able to count pulses during deep-sleep). - Jurassic Pork @daniel hello, in my DHT pure python library here to read a DHT sensor (temperature and humidity) i read the pulses send by the sensor like this : def __collect_input(self): # collect the data while unchanged found unchanged_count = 0 # this is used to determine where is the end of the data max_unchanged_count = 100 last = -1 data = [] m = bytearray(800) # needs long sample size to grab all the bits from the DHT irqf = disable_irq() self.__pin(1) not very accurate because samples are over 10 us i have made a first try with the new function pulses_get like that : def read2(self): self.__send_and_sleep(0, 0.019) irqf = disable_irq() self.__pin(1) # collect data data = pycom.pulses_get(self.__pin,500) enable_irq(irqf) return data here is what i get : it seems to be more accurate that the first method ! (real time in us of the pulses) timing of the DHT : The first value for 1 level (88) seems to be the Start HI . We have the 40 data bits of the DHT ! to be continued .... @jmarcelino yes, it's microseconds. @Eric24 we do have plans to make the function more generic. Your input regarding your needs to measure pulses would definitely help. What kind of API do you think will be more helpful for you? - jmarcelino @Eric24 There is a pycom.pulses_get(pin,timeout) function but seems a bit primitive and "raw" at the moment - as far as I can see only used internally to calibrate the deep sleep timer. You do get a list of (signal_level, duration) tuples, where duration seems to be in microseconds (RMT channel clocks = APB_CLK (80Mhz) / clk_div (80) but I'm not completely sure) There's a set of fixed thresholds at the moment so it may be only useful for some applications. @daniel said in New firmware release v1.7.7.b1 (REQUIRES NEW UPDATER TOOL): RMT driver for measuring pulses What exactly is the "RMT driver for measuring pulses"? We are looking at various pulse counting options, so I'm interested in learning more about this. - BetterAuto Pybytes Beta Dost mine eyes deceive me? OTAA?? Can't wait to see that working! Thanks Pycom team for all that you do. @robert-hh Besides the error code, OTA works. But there is something to be aware of, if OTA and wired flash is mixed. The firmware keeps two areas in flash, where the image can be stored using OTA, which are used alternatively (addresses 0x10000 and 0x1a0000). To determine which, a flash is kept. Flashing "make flash" and probably the updater always uses address 0x1000 without changing that flag. So if the active firmware is at 0x1a0000 after an OTA, and then "make flash" is then used to store a new image an 0x10000, this will not get active, since the startup code still assumes the active image at 0x1a0000. That state can reliably be reset by erasing the flash. Of course, one could also use another OTA load. @daniel said in New firmware release v1.7.7.b1 (REQUIRES NEW UPDATER TOOL): BTW, It's not mentioned in the release notes, but the SD card frequency has also been increased back to 20MHz. Yes, I noticed that, but forgot too mentioning it. @robert-hh yes thanks. The issues was caused by a very obscure bug inside FreeRTOS: esp32: Update to the latest IDF. Solve FreeRTOS context switch crash. That bug was timing related, that's why it was so erratic. BTW, It's not mentioned in the release notes, but the SD card frequency has also been increased back to 20MHz. Cheers, Daniel @daniel It seems, as if with this release the issue of unstable file writes is gone. I ran the loop&write test six times up to 2 million passes = 2048000000 Bytes successfully, at which time it stopped intentionally to avoid writing more than 2 GB into a file. The maximum size that can be written is 4 GB, but beyond 2 GB the file size is displayed wrong (int vs. uint mismatch). @daniel said in New firmware release v1.7.7.b1 (REQUIRES NEW UPDATER TOOL): esp32: Fix support for BLE indications on the GATT Server. I believe this is only for the server. I hope BLE indications for GATT Clients (GATTC) is fixed soon :) Currently, its not implemented.
https://forum.pycom.io/topic/1536/new-firmware-release-v1-7-7-b1-requires-new-updater-tool
CC-MAIN-2017-34
refinedweb
1,211
73.37
Hi All, Wondering if someone can help me finish off coding my script? It works the way I would like it to apart from one small problem which i've hit a brick wall over. i'll explain below. i am currently developing this page for my employers: on instructions from my employers they want me to remove the small palace cinema link on the page and instead when the page loads have the movie list visible instead. currently you will see the movie list if the search by movie or the small palace cinema link is clicked. i do have access to "most" files, i say most because under instructions im only to find a solution in the visSelect.aspx file. they will not give me access to the code-behind vb file. because if their software is updated in the future any changes i would make to the vb could be overwritten. i must say i am pretty much an amateur developer, with little experience dealing with aspx files. I have noticed a big difference in viewing the source in my browser live and editing the source locally. as if you do view the source of the page above there is a _doPostBack function but this is not editable locally when i edit the file. which i believe is quite important to what im hoping to achieve. i do believe this is because im dealing with aspx files and this function is likely to be kept in the code-behind vb file. PROBLEM: so i have developed this cookie script below based around the _doPostBack function, which does load the movie list like i want but unfortunately it does not load any screen times, when you click on your movie it loads the column but no times. i think this is because all the links use the _doPostBack function, but in my code i specify the Palace Cinema link only at least i think my code does. so im not sure why the movie times don't show up. to see what i mean about movie times on the current page (above) you have to click the cinema link first then of course choose your movie and then you should see the times. I am open to other solutions, but I believe i have exhausted my options and this is the closest i have been to completing it. So im hoping its just something missing or i can change in my script. Thanks. MY CODE: <script language="javascript" type="text/javascript"> <!-- function richloader(){ if(rich_get_cookie("richpal") == 0){ document.cookie = "richpal=1"; __doPostBack('1001',''); } if(rich_get_cookie("richpal") == 1){ document.cookie = "richpal=2"; } else{ delete_cookie ("richpal"); } } function delete_cookie ( cookie_name ) { var cookie_date = new Date ( ); // current date & time cookie_date.setTime ( cookie_date.getTime() - 1 ); document.cookie = cookie_name += "=; expires=" + cookie_date.toGMTString(); } function rich_get_cookie ( cookie_name ) { var results = document.cookie.match ( '(^|;) ?' + cookie_name + '=([^;]*)(;|$)' ); if ( results ) return ( unescape ( results[2] ) ); else return 0; } // --> </script> Last edited by rbrown; 04-02-2009 at 06:28 PM. There are currently 1 users browsing this thread. (0 members and 1 guests) Forum Rules
http://www.webdeveloper.com/forum/showthread.php?206132-Client-not-recognizing-server-changes&goto=nextnewest
CC-MAIN-2014-49
refinedweb
512
71.44
> Hello, I just want to know How to get object's X axis rotation EXACTLY as in inspector ? I tried transform.eulerAngles.x it works at the beginning but when the X axis in the inspector hits 83 the one in the script starts to decrease on its own, and when the one in the script hits 0 it goes up to 340 directly.. and I also tried transform.rotation.x and when the x axis in inspector hits like 10 the one in the script is something like 0.9465872 O_o So how to get the EXACT same value in the inspector's Transform window ? Thanks HI, if you call transform.rotation.x it will give you the cos(angle). I found this answer here: Answer by Bunny83 · Jun 06, 2018 at 12:44 AM The short answer is: You can't get the exact value as the editor will actually cache the value internally for editing. The problem is that Unity itself (the engine) does not work with euler angles at runtime but with quaterions. When you read transform.eulerAngles it will actually convert the quaterion to eulerAngles. Since eulerAngles are not a unique representation of an orientation there are several ways how the same orientation could be represented. Just as an example the eulerangles (0,0,0) is the same as (180,180,180) or (180, -180, 180). The editor tracks a seperate angle value for editing in the inspector. It's used when you actually edit the rotation in the editor. This value does not wrap around but simply continues to increase / decrease. If you want the eulerangles to be in a certain range, you need to adjust them yourself. For example if "y" should never be negative, just add 360 if the value that transform.eulerAngles.y returns is negative. X usually has the range of +- 90°. Anything larger could be represented by an angle smaller than 90 and y and z rotated by 180°. So the angle (100,30,0) would be the same as (80,210,180). As i said there simply is no "correct" or unique way how the eulerangles should represent a certain rotation. Since you haven't mentioned your actual problem you want to solve we can't suggest a different approach. You shouldn't rely on the eulerangles conversion in the first place. If you have never heard about Quaternions and you want to know how they work, have a look at this Numberphile video Answer by HarD-izzeR · Jun 06, 2018 at 10:15 AM Ok, Here is the problem: I made a day/night cycle script (just directional light rotating in X axis in a variable speed) I also have some low poly game objects (clouds) And I want the opacity of their materials (Aplha channel) to be equal to the sun movement So that if its day time the clouds would be visible and its fading out, and when its night time they are invisible and fading in.. All of this are working in my script but the timing of fade in/fade out are wrong... I tried to debug.log the rotation of x axis (which I didnt know its impossible..) and it came wrong... Here is my script: using System.Collections; using System.Collections.Generic; using UnityEngine; public class DayNight : MonoBehaviour { public GameObject[] clouds; public float speed = 1.0f; public float sunPos; void Update() { transform.Rotate(speed * Time.deltaTime, 0, 0); sunPos = transform.eulerAngles.x; // here is the problem clouds = GameObject.FindGameObjectsWithTag("Clouds"); foreach (GameObject item in clouds) { Color color = item.GetComponent<Renderer>().material.color; color.a = 1 - sunPos; item.GetComponent<Renderer>().material.color = color; } } } And here we have the problem. You try to use a value for something it's not meant for. You should first create a seperate time variable which defines the current time of your world and then just create an absolute rotation that matches the time. Again, eulerAngles are not a value stored in the transform but it's calculated from the object's rotation. You already have your "sunPos" variable. Rename it to "time" and reverse your logic: void Update() { time += speed * Time.deltaTime; if (time > 1f) { time = 0f; day++; } transform.rotation = Quaternion.Euler(time * 360,0,0); } This assumes "time" goes from 0 to 1. So if 0 is midnight, 0.5 will be noon. So if you multiply this "time" by 24 you actually get the current hour of the day. This will also allow you to set the current time more easily. So instead of relying on the current rotation of an object you actually dictate the rotation based on the time. Ok Thanks @Bunny83 I dont really understand... What is this day++ ? Will it be something like this : ? using System.Collections; using System.Collections.Generic; using UnityEngine; public class DayNight : MonoBehaviour { public GameObject[] clouds; public float speed = 1.0f; public float time; void Update() { time += speed * Time.deltaTime; if (time > 1f) { time = 0f; day++; } transform.rotation = Quaternion.Euler(time * 360, 0, 0); clouds = GameObject.FindGameObjectsWithTag("Clouds"); foreach (GameObject item in clouds) { Color color = item.GetComponent<Renderer>().material.color; color.a = 1 - time; item.GetComponent<Renderer>().material.color = color; } } } I just added "day++" to make it more clear that when time wraps around 1 day has passed. Keep in mind that the time value of 0 and a value of 1 actually means the same: midnight. Since you want to have the clouds invisible at night but visible during the day you probably want something like: color.a = Mathf.PingPong(time * 2, 1f); This will yield a value of 0 at a time of 0 as well as 1. However it will return a value of 1 at a time of 0.5 (noon). Have a look at this table: time | time*2 | Mathf.PingPong #------------------------------ 0 | 0 | 0 ---- 0.25 | 0.5 | 0.5 fading in 0.5 | 1.0 | 1 ---- 0.75 | 1.5 | 0.5 fading out 1.0 | 2.0 | 0 ---- Ok Thanks alot it works.. But just the timing is wrong... It should be the opposite.. When I play (I start at day (sun rot x is 0)) the clouds are invisible and fading in and when its night the clouds are visible and fading out. It should be the. How to specify the rotation angle in degrees, in the inspector? 2 Answers How do you apply a new rotation inside the unity editor? 2 Answers Duplicating rotation with axis constraints 1 Answer Dealing with rotating objects on a dynamic axis 0 Answers Missing active and static checkbox. 0 Answers
https://answers.unity.com/questions/1514780/how-to-get-objects-x-axis-rotation-exactly-as-in-i.html
CC-MAIN-2019-26
refinedweb
1,099
67.25
What we're making Seth Corker@darth_knoppix 🎬 Bounce animation with colour change in Framer Motion (React) #react #framermotion #webdev #webanimation09:01 AM - 29 Nov 2019 We're creating this bouncing animation using easeOut easing and yoyo. This creates a simple bounce which continues to loop. We also make use of keyframes instead of variants to specify the exact changes we wish to animate between. This is useful when changing the colour, we achieve this with some strategic transition properties. How to achieve the bounce and colour change If you'd like to see a video tutorial, here's one I prepared - it's about 4 minutes long and explains the process too. There are a couple of things to note in order to create this animation. Here is our BouncingBall component, first you'll notice the animate props use arrays instead of a single value or variant and second - we the magic happens in the transition prop. export default function BouncingBall() { return ( <div style={{ width: "2rem", height: "2rem", display: "flex", justifyContent: "space-around" }} > <motion.span style={ballStyle} transition={bounceTransition} animate={{ y: ["100%", "-100%"], backgroundColor: ["#ff6699", "#6666ff"] }} /> </div> ); } Using Keyframes in Framer Motion In previous Framer Motion animation tutorials I used variants or animated the properties directly. This time easy property within our animate object is a assigned an array value. This tells the motion component to treat the value changes as keyframes and sequentially set them. So, the y position will start at 100% and at the next frame it will become -100%. We do the same thing with the backgroundColor. Making the animation loop The transition property is the most important part of this animation. We define an object called bounceTransition, here we define how each property we are animating, actually performs the animation. The bounce is easy, we set yoyo to Infinity which means the animation will loop, when it reaches the end it will reverse the animation and continue playing. We set ease to easeOut to create the 'bounce'. This works well because it's smooth in the one part but has a sudden stop which produces the 'bounce' rather than a smooth movement which linear or easeInOut easing would give us. The colour change works by setting the same props as the y position animation and changing the duration to 0 so it's instantaneous and setting repeatDelay to twice the duration of our bounce animation (our bounce is 400ms so our delay is 800ms). We have two backgroundColor keyframes which will last 400ms each and continue to repeat. This creates the colour swap when the ball bounces. const bounceTransition = { y: { duration: 0.4, yoyo: Infinity, ease: "easeOut" }, backgroundColor: { duration: 0, yoyo: Infinity, ease: "easeOut", repeatDelay: 0.8 } }; Where to go from here? The animation achieves the effect but a good next step would be to apply some traditional animation techniques like squish and stretch to give it a less mechanical feel. This emphasises the motion by squishing the ball on impact and stretching it when it's in the air. Resources - To see the full source code, checkout the repo on GitHub (this also contains the other loading animation code from previous tutorials) - Check out my playlist of video tutorials covering animation in Framer Motion - Take a look at the official Framer Motion documentation Discussion (1) I’m working on something new. I’m using Framer Motion for transitions which you might commonly see on mobile. The horizontal scrolling is pure CSS, using scroll snapping. If you’d like to get a sneak preview of what I’m doing, follow me on twitter and send me questions!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/darthknoppix/framer-motion-bouncing-ball-animation-5ce3
CC-MAIN-2021-31
refinedweb
602
51.78
If you have a collection of methods in a file, is there a way to include those files in another file, but call them without any prefix (i.e. file prefix)? So if I have: [Math.py] def Calculate ( num ) [Tool.py] using Math.py for i in range ( 5 ) : Calculate ( i ) You will need to import the other file as a module like this: import Math If you don't want to prefix your Calculate function with the module name then do this: from Math import Calculate If you want to import all members of a module then do this: from Math import * Edit: Here is a good chapter from Dive Into Python that goes a bit more in depth on this topic.
https://codedump.io/share/zVcPYWA9cssc/1/how-to-include-external-python-code-to-use-in-other-files
CC-MAIN-2018-17
refinedweb
124
72.7
alive-progress alternatives and similar packages Based on the "Terminal Rendering" category. Alternatively, view alive-progress alternatives based on common mentions on social networks and blogs. rich9.7 9.8 alive-progress VS richRich is a Python library for rich text and beautiful formatting in the terminal. tqdm9.5 8.6 alive-progress VS tqdmA Fast, Extensible Progress Bar for Python and CLI CUTIE2.7 0.0 alive-progress VS CUTIECommand line User Tools for Input Easification Scout APM: A developer's best friend. Try free for 14-days Do you think we are missing an alternative of alive-progress or a related project? README [alive-progress logo](img/alive-logo.gif) alive-progress :) A new kind of Progress Bar, with real-time throughput, ETA, and very cool animations! Ever! 😃 [alive-progress demo](img/alive-demo.gif) I like to think of it as a new kind of progress bar for Python since it has among other things: - a live spinner that is incredibly cool, and clearly shows your lengthy process did not hang, or your ssh connection did not drop; - a visual feedback of your current processing, as the live spinner runs faster or slower with it; - an efficient multi-threaded bar, which updates itself at a fraction of the actual processing speed (1,000,000 iterations per second equates to roughly 60 updates per second) to keep CPU usage low and avoid terminal spamming (you can also calibrate this to your liking); - nice monitoring of both position and throughput of your processing; - an ETA (expected time of arrival) with a smart exponential smoothing algorithm, that shows the time to completion; - automatic print and logging hooks, which allows print statements and logging messages to work effortlessly, right amid an animated bar, and even enriching them with the current bar position when they occurred; - a nice receipt is printed when your processing finishes, including the last bar rendition, the elapsed time, and the observed throughput; - it detects under and overflows, enabling you to track hits, misses, or any desired count, not necessarily the actual iterations; - it automatically detects if there's an allocated tty, and if there isn't (like in a shell pipeline), only the final receipt is printed (so you can safely include it in any code, and rest assured your log file won't get thousands of progress lines); - you can pause it! I think that's an unprecedented feature for progress bars ANYWHERE — in Python or any other language — no one has ever done it! It's incredible to be able to get back to the Python prompt during any running process! Then you can manually adjust an item or prepare something, and get back into that running process as if it had never stopped!! All alive_barwidgets are kept as they were, and the elapsed time nicely ignores the paused time!; - it is customizable, with a growing smorgasbord of bar and spinner styles, as well as several factories to easily generate yours! Now (📌 new in 2.0) we even have super powerful and cool .check()tools in both bars and spinners, to help you design your animations! You can see all the frames and cycles exploded on screen, with several verbosity levels, even including an alive rendition! 😜 📌 NEW 2.1 series! YES! Now alive-progress has support for Jupyter Notebooks, and also includes a Disabled state! Both were highly sought after, and have finally landed! And better, I've implemented an auto-detection mechanism for jupyter notebooks, so it just works, out of the box, without any changes in your code!! See for yourself: [alive-progress demo](img/alive-jupyter.gif) It seems to work very well, but at this moment it should be considered experimental. There were instances in which some visual glitches have appeared, like two alive_barrefreshes being concatenated together instead of over one another... And it's something I think I can't possibly workaround: it seems Jupyter sometimes refresh the canvas at odd times, which makes it lose some data. Please let me know on the issues if something funnier arises. 📌 NEW 2.0 series! This is a major achievement in alive-progress! I took 1 year developing it, and I'm very proud of what I've accomplished \o/ - now there's complete support for Emojis 🤩 and exotic Unicode chars in general, which required MAJOR refactoring deep within the project, giving rise to what I called "Cell Architecture" => now all internal components use and generate streams of cells instead of chars, and correctly interprets grapheme clusters — it has enabled to render complex multi-chars symbols as if they were one, thus making them work on any spinners, bars, texts, borders, backgrounds, everything!!! there's even support for wide chars, which are represented with any number of chars, including one, but take two spaces on screen!! pretty advanced stuff 🤓 - new super cool spinner compiler and runner, which generates complete animations ahead of time, and plays these ready-to-go animations seamlessly, with no overhead at all! 🚀 - the spinner compiler also includes advanced extra commands to generate and modify animations, like reshape, replace, transpose, or randomize the animation cycles! - new powerful and polished .check()tools, that compile and beautifully render all frames from all animation cycles of spinners and bars! they can even include complete frame data, internal codepoints, and even their animations! 👏 - bars engine revamp, with invisible fills, advanced support for multi-char tips (which gradually enter and leave the bar), borders, tips and errors of any length, and underflow errors that can leap into the border if they can't fit! - spinners engine revamp, with standardized factory signatures, improved performance, new types, and new features: smoother bouncing spinners (with an additional frame at the edges), optimized scrolling of text messages (which go slower and pause for a moment at the edges), new alongside and sequential spinners, nicer effect in alongside spinners (which use weighted spreading over the available space), smoother animation in scrolling spinners (when the input is longer than the available space) - new builtin spinners, bars, and themes, which makes use of the new animation features - new showtime that displays themes and is dynamic => it does not scroll the screen when it can't fit either vertically or horizontally, and can even filter for patterns! - improved support for logging into files, which gets enriched as the print hook is! - several new configuration options for customizing appearance, including support for disabling any alive-progresswidgets! - includes a new iterator adapter alive_it, that accepts an iterable and calls bar()for you! - requires Python 3.6+ (and officially supports Python 3.9 and 3.10) Since this is a major version change, direct backward compatibility is not guaranteed. If something does not work at first, just check the new imports and functions' signatures, and you should be good to go. All previous features should still work here! 👍 This README was completely rewritten, so please take a full look to find great new details!! Get it Just install with pip: ❯ pip install alive-progress Awake it Want to see it gloriously running in your system before anything? ❯ python -m alive_progress.tools.demo [alive-progress demo-tool](img/alive-demo-tool.png) 😜: |████████████████████████████████████████| 1000/1000 [100%] in 5.8s (171.62/s) |██████████████████████████▋⚠︎ | (!) 1000/1500 [67%] in 5.8s (172.62/s) |████████████████████████████████████████✗︎ (!) 1000/700 [143%] in 5.8s (172.06/s) |████████████████████████████████████████| 1000 in 5.8s (172.45/s) Nice huh? Loved it? I knew you would, thank you 😊. To actually use it, just wrap your normal loop in an alive_bar context manager like this: with alive_bar(total) as bar: # declare your expected total for item in items: # <<-- your original loop print(item) # process each item bar() # call `bar()` at the end And it's alive! 👏 So, in short: retrieve the items as always, enter the alive_bar context manager with the number of items, and then iterate/process those items, calling bar() at the end! It's that simple! :) Master it itemscan be any iterable, like for example a queryset; - the first argument of the alive_baris the expected total, like qs.count()for querysets, len(items)for iterables with length, or even a static number; - the call bar()is what makes the bar go forward — you usually call it in every iteration, just after finishing an item; - if you call bar()too much (or too few at the end), the bar will graphically render that deviation from the expected total, making it very easy to notice overflows and underflows; - to retrieve the current bar count or percentage, call bar.current(). You can get creative! Since the bar only goes forward when you call bar(), it is independent of the loop! So you can use it to monitor anything you want, like pending transactions, broken items, etc, or even call it more than once in the same iteration! So, in the end, you'll get to know how many of those "special" events there were, including their percentage relative to the total! Displaying messages While inside an alive_bar context, you can effortlessly display messages with: - the usual Python print()statement, where alive_barnicely cleans up the line, prints your message alongside the current bar position at the time, and continues the bar right below it; - the standard Python loggingframework, including file outputs, are also enriched exactly like the previous one; - the cool bar.text('message'), which sets a situational message right within the bar, where you can display something about the current item, or the phase the processing is in; - and all of this works just the same in an actual terminal or a Jupyter notebook! [alive-progress printing messages](img/print-hook.gif) Auto-iterating (📌 new in 2.0)?! 😜, e.g.. Modes of operation Definite/unknown: Counters animations concurrently and independently of each other, rendering a unique show in your terminal! 😜 So, definite and unknown modes both use internally a counter to maintain progress. This is the source value from which all widgets are derived. Manual: Percentages On the other hand, manual mode uses internally a percentage to maintain any percentage to the bar() handler! For example, to set it to 15%, just call bar(0.15) — which is 15 / 100. You can also use total here! If you do provide it, the bar will infer an internal counter, and thus will be able to offer you the same count, throughput, and ETA widgets! If you don. The bar() handlers The bar() handlers support either relative or absolute semantics, depending on the mode: - counter modes use relative positioning, so you can just call bar()to increment the counter by one, or send any other positive increment like bar(5)to increment by those at once; - manual mode use absolute positioning, so you can just call bar(0.35)to instantly put the bar in 35% position — this argument is mandatory here! The manual mode enables you to get super creative! Since you can set the bar instantly to whatever position you want, you could: - make it go backward — perhaps to graphically display the timeout of something; - create special effects — perhaps to act as a real-time gauge of some sort. In any case, to retrieve the current count/percentage, just call: bar.current(): - in counter modes, this provides an integer — the actual internal counter; - in manual mode, this provides a float in the interval [0, 1] — the last percentage set. Summary When total is provided all is cool: When it isn't, some compromises have to be made: It's quite simple, you do not need to think about which mode you should use: Just always send the total if you have it, and use manual if you need it! It will just work the best it can! 👏 \o/ Maintaining an open source project is hard and time-consuming. I've put much ❤️ and effort into this. You can back me up with a donation if you've appreciated my work, thank you 😊 Customize it Styles Wondering what styles are builtin? It's showtime! ;) from alive_progress.styles import showtime showtime() [alive-progress spinners](img/showtime-spinners.gif) I've made these styles just to try the factories I've created, but I think some of them ended up very very cool! Use them at will, mix them to your heart's content! The showtime exhibit has an optional argument to choose which show to present, Show.SPINNERS (default), Show.BARS or Show.THEMES, do take a look at them! ;) [alive-progress bars](img/showtime-bars.gif) (📌 new in 2.0) And the themes one: [alive-progress themes](img/showtime-themes.gif) The showtime exhibit also accepts some customization options: - fps: the frames per second rate refresh rate, default is 15; - length: the length of the bars, default is 40; - pattern: a filter to choose which ones to display. For example to get a marine show, you can showtime(pattern='boat|fish|crab'): [alive-progress filtered spinners](img/showtime-marine-spinners.gif) You can also access these shows with the shorthands show_bars(), show_spinners(), and show_themes()! There's also a small utility called print_chars(), to help find that cool character to put in your customized spinners and bars, or to determine if your terminal does support Unicode characters. Configurations ↳ accepts a predefined spinner name, a custom spinner factory, or None bar: the bar style to be rendered in known modes ↳ accepts a predefined bar name, a custom bar factory, or None unknown: the bar style to be rendered in the unknown mode ↳ accepts a predefined spinner name, or a custom spinner factory (cannot be None) theme: [ 'smooth'] a set of matching spinner, bar, and unknown ↳ accepts a predefined theme name force_tty: [ None] forces animations to be on, off, or according to the tty (more details here) disable: [ False] if True, completely disables all output, do not install hooks length of titles, or 0 for unlimited ↳ title will be truncated if longer, and a cool ellipsis "…" will appear at the end spinner_length: [ 0] forces the spinner length, or 0for its natural one And there's also one that can only be set locally in an alive_bar context:'. ... Create your own animations Yes, you can assemble your own spinners! And it's easy! I've created a plethora of special effects, so you can just mix and match them any way you want! There are frames, scrolling, bouncing, sequential, alongside, and delayed spinners! Get creative! 😍 Intro: How do they work? The spinners' animations are engineered by very advanced generator expressions, deep within several layers of meta factories, factories and generators 🤯! - the meta factory (public interface) receives the styling parameters from you, the user, and processes/stores them inside a closure to create the actual factory => this is the object you'll send to both alive_barand config_handler; - internally it still receives other operating parameters (like for instance the rendition length), to assemble the actual generator expression of the animation cycles of some effect, within yet another closure; - this, for each cycle, assembles another generator expression for the animation frames of the same effect; - these generators together finally produce the streams of cycles and frames of the cool animations we see on the screen! Wow 😜👏 off the scene => this is all only one cycle! Then it is followed by another cycle to make it all backward. But the same bouncing spinner accepts repeating patterns in both right and left directions, which generates the cartesian product of all combinations, thus capable of producing dozens of different cycles!! 🤯 And there's more! They only yield the next animation frame until the current cycle is exhausted, then halt! to play cycles in sequence or alongside, and I can amaze you displaying several animations at the same time on the screen without any interferences! It's almost like they were... alive! ==> Yes, that's where this project's name came from! 😉 (📌 new in 2.0) A Spinner Compiler, really? ever changing their lengths or breaking the Unicode encoding altogether! Yes, several chars in sequence can represent another completely different symbol, so they cannot ever be split! They have to enter and exit the scene always together, all at once, or the grapheme won't show up at all (an Emoji for instance)!! Enter the Spinner Compiler...... This has made possible some incredible things!! Since this Compiler generates the whole spinner frame data beforehand: the grapheme fixes can be applied only once, and the animations do not need to be calculated again! So I can just collect all that ready to play animations and be done with it, so no runtime overhead at all!! 👏 Also, with the complete frame data compiled and persisted, I could create several commands to refactor that data, like changing shapes, replacing chars, adding visual pauses (frame repetitions), generating bouncing effects on-demand over any content, and even transposing cycles with frames!! But how can you see these effects? Does the effect you created look good? Or is it not working as you thought? YES, now you can see all generated cycles and frames analytically, in a very beautiful rendition!! I love what I've achieved here 😊, it's probably THE most beautiful tool I've ever created... Behold the check tool!! [alive-progress check tool](img/alive-spinner-check.png) It's awesome if I say so myself, isn't it? And a very complex piece of software I'm proud of, [take a look at its code](alive_progress/animations/spinner_compiler.py) if you'd like. And the check tool is much more powerful! For instance, you can see the codepoints of the frames!!! And maybe have a glimpse of why this version was so so very hard and complex to make... [alive-progress check tool](img/alive-spinner-check-codepoints.png) In red, you see the grapheme clusters, that occupy one or two "logical positions", regardless of their actual sizes... These are the "Cells" of the new Cell Architecture... Look how awesome an Emoji Flag is represented: [alive-progress check tool](img/alive-spinner-check-codepoints-flag.png)... Factories. Create your own bars Customizing bars is nowhere near that involved. Let's say they are "immediate", passive objects. They do not support animations, i.e. they will always generate the extra parameter, a floating-point number between 0 and 1, which is the percentage to render itself. alive_barcalculates this percentage automatically based on the counter and total, but you can send it yourself when in the manualmode! Bars also do not have a Bar Compiler, but they do provide the check tool!! 🎉 [alive-progress check tool](img/alive-bar-check.png) You can even mix and match wide chars and normal chars! Just like spinners do! [alive-progress check tool](img/alive-bar-check-mix.png) Use and abuse the check tools!! They have more goodies waiting for you, there! 👏 And if you want to know even more, exciting stuff lies ahead! Maintaining an open source project is hard and time-consuming. I've put much ❤️ and effort into this. You can back me up with a donation if you've appreciated my work, thank you 😊 Advanced Loop-less use So, you need to monitor a fixed operation, without any loops? It'll work for sure! Here is a naive example (we'll do better in a moment): assumes all steps take the same amount of time, but actually, each one may take a very different time to complete. Think read_file and tokenize may be extremely fast, which makes the percentage skyrocket to 50%, then stopping for a long time in the process step... You get the point, it can ruin the user experience and create a very misleading ETA. To improve upon that you need to distribute the steps' percentages accordingly! Since you told alive_bar there were four steps, when the first one was completed it understood 1/4 or 25% of the whole processing was complete... Thus, you need to measure how long your steps actually take and use the manual mode to increase the bar percentage by the right amount them within a manual mode alive_bar! For example, if the timings you found were 10%, 30%, 20%, and 40%, you'd use 0.1, 0.4, 0.6, and 1.0 (the last one should always be 1.0):. FPS Calibration So, you want to calibrate the engine? The alive-progress bars dozens of tests, and never found one that really inspired that feel of speed I was looking for. The best one seemed to a few quite: [alive-progress calibration](img/alive-calibration.gif) So, if your processing hardly gets to 20 items per second, and you think alive-progressis rendering sluggish, you could increase that sense of speed by calibrating it to let's say 40, and it will be running waaaay faster... It is better to always leave some headroom and calibrate it to something between 50% and 100% more, and then tweak it from there to find the one you like the most! :) The Pause Mechanism Oh, you want to suspend it finds and waiting (potentially a long time) until you can finally start fixing them... You could of course mitigate that by processing in chunks or printing them and acting via another shell or something like that, but those have their own shortcomings... 😓 Now there's a better way! Simply pause the actual detection process for a while! Then you have to wait only until the next fault is found, and act in near real-time! To use the pause mechanism you must be inside a function, to enable the code to yield the items you want to interact with. You most probably already use one in your code, but in the ipython shell, for example, you probably not. So just wrap your code in a function, then just enter a bar.pause() context!! def reconcile_transactions(): qs = Transaction.objects.filter() # django example, or in sqlalchemy: session.query(Transaction).filter() with alive_bar(qs.count()) as bar: for transaction in qs: if not validate(transaction): with bar.pause(): yield transaction bar() That's it! Now call this reconcile_transactions function to instantiate the generator and assign it to some variable, and whenever you want the next broken transaction, call next(gen, None)! The alive-progress bar will start as usual, but as soon as any inconsistency is found, the bar will pause itself and you'll get the prompt back with the transaction! It's almost magic! 😃 In [11]: gen = reconcile_transactions() In [12]: next(gen, None) |█████████████████████ | 105/200 [52%] in 5s (18.8/s, eta: 4s) Out[12]: Transaction<#123> You can then use the usual _ and _12 ipython's shortcuts or just assign it directly with trn = next(gen, None), and you're all set to fix that transaction! When you're done, just reactivate the detection process with the same next as before!! The bar reappears and continues exactly from where it was!! In [21]: next(gen, None) |█████████████████████ | ▁▃▅ 106/200 [52%] in 5s (18.8/s, eta: 4s) Rinse and repeat, until there are no broken transactions anymore. Nice huh 😄 Forcing animations on non-interactive consoles Do those astonishing animations refuse to display? Some terminals occasionally do not report themselves as "interactive", like within shell pipeline commands "|" or background processes. And some never report themselves as interactive, like Pycharm's console and Jupyter notebooks. When a terminal is not interactive, alive-progress disables all kinds of animations, only printing the final receipt. This is made to avoid spamming a log file or messing up a pipe output with thousands of progress bar updates. So, when you know it's safe, you can force enable it and see alive-progress in all its glory! Just use the force_tty argument! with alive_bar(1000, force_tty=True) as bar: for i in range(1000): time.sleep(.01) bar() The values accepted are: force_tty=True-> enables animations, and auto-detects Jupyter Notebooks! force_tty=False-> disables animations, keeping only the final receipt force_tty=None(default) -> auto select, according to the terminal's tty state You can also set it system-wide using the config_handler, then you won't need to pass that manually in all alive_bar calls. Do note that Pycharm's console and Jupyter notebooks are heavily instrumented and thus have more overhead, so the outcome may not be as fluid as you would expect; and on top of that Jupyter notebooks do not support ANSI Escape Codes, so I had to develop some workarounds, to emulate functions like "clear the line" and "clear from cursor". To see alive_baranimations as I intended, always prefer a full-fledged terminal. Interesting facts - This whole project was implemented in functional style; - It uses extensively (and very creatively) Python Closures and Generators, e.g. all spinners are made with cool Generator Expressions! Besides the [spinners](alive_progress/animations/spinners.py) module, the [exhibit](alive_progress/styles/exhibit.py) module and the [spinner_player](alive_progress/animations/utils.py) function are cool examples 😜; - Until 2.0,); - Also, until 2.0). To do - enable multiple simultaneous bars, for nested or multiple activities (this is always requested!) - reset a running bar context, a quantifying mode is expected - dynamic bar width rendition, listening to changes in terminal size (the whole progress-bar line already truncates when needed, according to terminal size) - improve test coverage, currently at 77% branch coverage (but it's hard since it's multi-threaded and includes system hooks) - create a contribsystem, to allow a simple way to share the coolest users' spinners and bars - support colors in spinners and bars (it's very hard, since color codes alter string sizes and it's a mess to synchronize animations then, besides correctly cutting, reversing, and iterating strings while also maintaining colors is very very complex) - any other ideas are welcome! Already done 👍 - jupyter notebook support (experimental at the moment but awesome, since it works the same as in the terminal, animations and everything 😉) - Python versions End of Life notice The alive_progress framework starting from version 2.0 does not support Python 2.7 and 3.5 anymore. If you still need support for them, you can always use the versions 1.x, which are also full-featured and do work very well, just: ❯ pip install -U "alive_progress<2" If you put this version as a dependency in a requirements.txt file, I strongly recommend putting alive_progress<2, as this will always fetch the latest release of the v1.x series. That way, if I ever release a bug fix for it, you will get it the next time you install it. Changelog highlights (complete [here](CHANGELOG.md)): - 2.1.0: Jupyter notebook support (experimental), Jupyter auto-detection, disable feature and configuration - 2.0.0: new system-wide Cell Architecture with grapheme clusters support; super cool spinner compiler and runner; ) - 1.6.2: new bar.current()method; newlines get printed on vanilla Python REPL; the bar is truncated to 80 chars on Windows. - 1.6.1: fix logging support for Python 3.6 and lower; support logging for file; support for wide Unicode chars, which use 2 columns but have length 1 - 1.6.0: soft wrapping support; hiding cursor support; Python logging support; exponential smoothing of ETA time series; proper bar title, always visible; enhanced times representation; new bar.text()method, to set situational messages at any time, without incrementing position (deprecates 'text' parameter in bar()); performance optimizations - 1.5.1: fix compatibility with Python 2.7 (should be the last one, version 2 is in the works, with Python 3 support only) - 1.5.0: standard_bar accepts a backgroundparameter instead of blank, which accepts arbitrarily sized strings and remains fixed in the background, simulating a bar going "over it" - 1.4.4: restructure internal packages; 100% branch coverage of all animations systems, i.e., bars and spinners - 1.4.3: protect configuration system against other errors (length='a' for example); first automated tests, 100% branch coverage of configuration system - 1.4.2: sanitize text input, keeping \n from entering and replicating bar on the screen - 1.4.1: include license file in the source distribution - 1.4.0: print() enrichment can now be disabled (locally and globally), exhibits now have a real-time fps indicator, new exhibit functions show_spinnersand show_bars, new utility print_chars, show_barsgain some advanced demonstrations (try it again!) - 1.3.3: further improve stream compatibility with isatty - 1.3.2: beautifully finalize bar in case of unexpected errors - 1.3.1: fix a subtle race condition that could leave artifacts if ended very fast, flush print buffer when position changes or bar terminates, keep the total argument from unexpected types - 1.3.0: new fps calibration system, support force_tty and manual options in global configuration, multiple increment support in bar handler - 1.2.0: filled blanks bar styles, clean underflow representation of filled blanks - 1.1.1: optional percentage in manual mode - 1.1.0: new manual mode - 1.0.1: pycharm console support with force_tty, improve compatibility with Python stdio streams - 1.0.0: first public release, already very complete and mature License This software is licensed under the MIT License. See the LICENSE file in the top distribution directory for the full license text. Maintaining an open source project is hard and time-consuming. I've put much ❤️ and effort into this. You can back me up with a donation if you've appreciated my work, thank you 😊 *Note that all licence references and agreements mentioned in the alive-progress README section above are relevant to that project's source code only.
https://python.libhunt.com/alive-progress-alternatives
CC-MAIN-2021-49
refinedweb
4,906
62.07
Alright, so the assignment for my entry level c++ class is to write a function isVowel in a program that can identify whether a letter entered by a user is a vowel or not. Here is what I have conjured up however know from looking at it I am very wrong in my approach, can somebody out there send me in the right direction, I cannot figure it out from my text and tutorials. Thanks for your input! Code:#include <iostream> using namespace std; char isVowel(char ch); int main() { char ch; cout << "Please enter a letter from the alphabet: "; cin >> ch; cout<<"the letter you have entered is: isVowel(ch)"; return 0; } char isVowel(char x) { if (char ch = 'a') cout << "A Vowel" <<endl; else cout << "Not a vowel" <<endl; return 1; }
https://cboard.cprogramming.com/cplusplus-programming/57637-help-im-still-learning.html
CC-MAIN-2017-13
refinedweb
133
55.92
Introducing Spock, a testing and specification framework for the JVM Introducing Spock, a testing and specification framework for the JVM Join the DZone community and get the full member experience.Join For Free Get the Edge with a Professional Java IDE. 30-day free trial. Spock is a new testing and specification framework for Java and Groovy developers. When spreading the word about Spock, I often get the question: "Why on earth would I need yet another testing framework? Aren't there enough xUnit/xSpec clones out there already?" In this post, I'll try to show you how Spock is different, and why it is worth a closer look. Assertions Assertions are the most fundamental way to express how a piece of software should behave. It's almost a tradition that every new testing framework comes with its own assertion API, adding a few wrinkles here and there, and naming things a little differently. For example, to say that two things are equal, you would typically write something like: - assertEquals(x, y) - x.shouldEqual(y) - x.mustBe(y) Every now and then, a stranger comes along and asks if he can't just use plain boolean expressions instead of an assertion API. But we all know that boolean expressions can't tell us why they failed. Or can they? Meet Spock! @Speck class Assertions { def "comparing x and y"() { def x = 1 def y = 2 expect: x < y // OK x == y // BOOM! } } In Spock, assertions come after the expect: label, and are just what the stranger asked for - plain boolean expressions! Here is what we get when we run this test: Condition not satisfied: x == y | | | 1 | 2 false Of course this doesn't just work for simple comparisons, but for arbitrary assertions. Here is another, slightly more interesting output: Condition not satisfied: Math.max(a, b) > c | | | | | 112 | 94 | 115 112 false Again, Spock's runtime has collected all relevant information, and presents it in an intuitive way. Nice, isn't it? Data-driven tests Although data-driven testing is a natural extension of state-based testing, it doesn't seem to be used very often. I suppose this is because test frameworks make writing data-driven tests rather hard. All of them? Meet Spock! @Speck class DataDriven { def "maximum of two numbers"() { expect: Math.max(a, b) == c where: a << [7, 4, 9] b << [3, 5, 9] c << [7, 5, 9] } } The expect block provides just the test logic, using the free (i.e. undefined) variables a, b and c as placeholders for data. It's then up to the where block to bind concrete values to these variables. When this test is run, it is repeated three times: first with a = 7, b = 3, and c = 7; then with a = 4, b = 5, and c = 5; and finally with a = b = c = 9. Of course, the test data doesn't have to be baked into the test. For example, let's try to load it from a database instead: @Speck class DataDriven { @Shared sql = Sql.newInstance("jdbc:derby:spockdata", "org.apache.derby.jdbc.EmbeddedDriver") def "maximum of two numbers"() { expect: Math.max(a, b) == c where: row << sql.rows("select * from maxdata") a = row["a"] b = row["b"] c = row["c"] } Here the values for a, b, and c are taken from the equally named columns of the maxdata table, until no more rows are left. Using a syntax similar to Groovy's multi-assignment, the where block can be simplified even further: where: [a, b, c] << sql.rows("select a, b, c from maxdata") Together with the creation of the Sql instance above, we are now down to two lines of code to load our test data from a database. How easy is that? Summary Assertions and data-driven tests are just two areas where Spock achieves more with less. At, you will also meet Spock's incredibly lean mocking framework, its interception-based extension mechanism, and other fascinating life forms. If you have any questions or suggestions, please write us at, or file an issue at. I'm looking forward to your feedback! Peter Niederwieser Founder, Spock Framework Get the Java IDE that understands code & makes developing enjoyable. Level up your code with IntelliJ IDEA. Download the free trial. Published at DZone with permission of Andres Almiray . See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/introducing-spock-testing-and
CC-MAIN-2018-43
refinedweb
748
64.41
Nmap Development mailing list archives The change from C++ to Lua [1] has been merged (r12887) into the Nmap trunk. The merge came from the nse-lua-merge [2] branch which is a watered down version of nse-lua [3]. The features removed in the final patch from nse-lua were developed with the intention of demonstrating solutions to known (and some unknown) bugs or problems in NSE. These features were not directly related to the main intent of the branch -- changing the C++ version of NSE to Lua -- and so will be discussed and possibly merged separately. To be specific, the parts of nse-lua which were not merged: o Host Timeout Management -- Charging a host time for execution only when threads are actively working on its behalf. o Coroutine Yields Propagated up to NSE -- When NSE would yield a thread, the yield would properly propagate back to NSE. o NSE API -- A variety of functions in the 'nse' namespace which manipulate NSE or the running threads. o Host and Port Userdata - Lua userdata that would represent Targets and individual Ports on the target. Users should not expect this change to impact their scripts directly; however, NSE does operate differently in a few significant ways: o NSE loads immediately when Nmap starts. It first loads and runs nse_main.lua which will: o Load script arguments. o Set package.path to include the nselib directory. o Load all invariant (across host group scans) chosen by the categories, directories, or script files (--script). Any problems loading scripts will cause NSE to immediately exit before any scanning begins. o NSE now uses the procedures open_nse and close_nse to initialize NSE's state and destroy it. o The procedure nse_restore(lua_State *L, int number) is used to resume a thread that is waiting. It replaces the old process_waiting2running function. L is the thread and number is the amount of values on the stack to be resumed. As noted above, these functional changes do not directly affect script execution but do affect how NSE operates. Users can expect to see NSE perform with less baffling errors and with more meaningful debug output. Because the script engine is now written in Lua, users can also inspect how the engine actually runs their code. Further, because nse_main.lua is recompiled across Nmap invocations, a user can actually change how NSE operates without recompiling Nmap. For example, one could add more debug output when developing scripts among other endless possibilities. Please post here if you have any questions concerning this new implementation in this thread. [1] [2] svn://svn.insecure.org/nmap-exp/patrick/nse-lua-merge [3] svn://svn.insecure.org/nmap-exp/patrick/nse-lua -- -Patrick Donnelly "One of the lessons of history is that nothing is often a good thing to do and always a clever thing to say." -Will Durant _______________________________________________ Sent through the nmap-dev mailing list Archived at By Date By Thread
http://seclists.org/nmap-dev/2009/q2/90
CC-MAIN-2014-42
refinedweb
491
64.3
Details - Type: New Feature - Status: Resolved - Priority: Major - Resolution: Fixed - Component/s: emma-plugin - Labels:None - Similar Issues: Description JaCoCo is the successor of emma. Maybe it is possible to support the new coverage file format. Attachments Activity Attached an xslt Stylesheet which converts the jacoco xml into emma-plugin compatible xml. Its not perfect, but at least, you can include jacoco into jenkins builds now without getting errors. Not all coverage metrics could be mapped 1:1. Also Method names are not perfect, but for me its sufficent. Use the stylesheet in your ant-buildfile like: <jacoco:report> bla bla <xml destfile="${build.dest}/test-reports/coverage_jacoco.xml" /> </jacoco:report> <xslt style="${build.installer.dir}/coverage/jacoco_to_emma.xslt" in="${build.dest}/test-reports/coverage_jacoco.xml" out="${build.dest}/test-reports/coverage.xml" > <!-- saxon or some other xslt2/xpath2 processor is required --> <classpath> <pathelement location="${top.dir}/lib/saxon9he.jar"/> </classpath> </xslt> Of corse, the "group" settings from jacoco will be lost, since emma does not support this. Thank you for the XSLT. I could get it running. However, the resulting coverage.xml only contains a summary (about 500 bytes file size) - even after upgrading JaCoCo from 0.5.2 to 0.5.7. Some numbers differ slightly between the coverage.xml and the JaCoCo HTML Report, like e.g. the covered classes count. But this is sufficient to get the jenkins emma plug-in working. So, now I can see the trend. Thank you very much. Hi Markus, when I try this, I get [xslt] : Error! Error checking type of the expression 'funcall(replace, [step("attribute", 22), literal-e xpr , literal-expr(.)])'. I suspect some XSLT 1.0 vs 2.0, ant jar or some such issue (some sites indicate a fn: namespace), I'm using the saxon9he.jar from your example. Any ideas? Hi there, I forked the Emma plugin back in October, and got something that more-or-less works for JaCoCo. It does need a few specific fixes: - JaCoCo collects a slightly different set of metrics than Emma does, and the table headings in the template need to be updated - the source file line-by-line coverage HTML files have to be included in the produced report - a few tests from the original Emma plugin are still failing (I updated most of them already, but some will are invalid for JaCoCo) You can take a look at the current state of it here: Help is very welcome! On a side note, this topic was suggested as a subject for a GSoC by JBoss (Jonathan is the mentor) Any help of finding and motivating a student for to work on this in the GSoC context is welcome! EclEmma plugin for Jacoco reports in Eclipse is great. I guess they have a cool API to parse jacoco.exec files and a cleaner way than turning them into Emma reports with XSL. Here's some very humble beginnings for a start:.
https://issues.jenkins-ci.org/browse/JENKINS-10835?focusedCommentId=158127&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2016-44
refinedweb
492
58.58
Text Generation using Tensorflow, Keras and LSTM Automatic Text Generation. The dataset is available here. LSTM -. Here we are importing the necessary libraries:- - We have used a command to select the tensorflow version as 2.x - We have imported tensorflowto build the model. - We have imported stringto get set of punctuations. - We have imported requeststo get the data file in the notebook. %tensorflow_version 2.x import tensorflow as tf import string import requests The get() method sends a GET request to the specified url. Here we are sending a request to get the text document of the data. response = requests.get('') Now we will display some part of the text returned by requests.get(). response.text[:1500] 'This is the 100th Etext file presented by Project Gutenberg, and\nis presented in cooperation with World Library, Inc., from their\nLibrary of the Future and Shakespeare CDROMS. Project Gutenberg\noften releases Etexts that are NOT placed in the Public Domain!!\n\nShakespeare\n\n*This Etext has certain copyright implications you should read!*\n\n<<THIS ELECTRONIC VERSION OF THE COMPLETE WORKS OF WILLIAM\nSHAKESPEARE IS COPYRIGHT 1990-1993 BY WORLD LIBRARY, INC., AND IS\nPROVIDED BY PROJECT GUTENBERG ETEXT OF ILLINOIS BENEDICTINE COLLEGE\nWITH PERMISSION. ELECTRONIC AND MACHINE READABLE COPIES MAY BE\nDISTRIBUTED SO LONG AS SUCH COPIES (1) ARE FOR YOUR OR OTHERS\nPERSONAL USE ONLY, AND (2) ARE NOT DISTRIBUTED OR USED\nCOMMERCIALLY. PROHIBITED COMMERCIAL DISTRIBUTION INCLUDES BY ANY\nSERVICE THAT CHARGES FOR DOWNLOAD TIME OR FOR MEMBERSHIP.>>\n\n*Project Gutenberg is proud to cooperate with The World Library*\nin the presentation of The Complete Works of William Shakespeare\nfor your reading for education and entertainment. HOWEVER, THIS\nIS NEITHER SHAREWARE NOR PUBLIC DOMAIN. . .AND UNDER THE LIBRARY\nOF THE FUTURE CONDITIONS OF THIS PRESENTATION. . .NO CHARGES MAY\nBE MADE FOR *ANY* ACCESS TO THIS MATERIAL. YOU ARE ENCOURAGED!!\nTO GIVE IT AWAY TO ANYONE YOU LIKE, BUT NO CHARGES ARE ALLOWED!!\n\n\n**Welcome To The World of Free Plain Vanilla Electronic Texts**\n\n**Etexts Readable By Both Humans and By Computers, Since 1971**\n\n*These Etexts Prepared By Hundreds of Volunteers and Donations*\n\nInforma' You can see the character \n in the text. \n means “newline”. Now we are going to split the text with respect to \n. data = response.text.split('\n') data[0] 'This is the 100th Etext file presented by Project Gutenberg, and' The text file contains a header file before the actual data begins. The actual data begins from line 253. So we are going to slice the data and retain everything from line 253 onwards. data = data[253:] data[0] ' From fairest creatures we desire increase,' The total number of lines in our data is 124204. len(data) 124204 Right now we have a list of the lines in the data. Now we are going to join all the lines and create a long string consisting of the data in continuous format. data = " ".join(data) data[:1000] "" You can see that the data consists of various punctuation marks. We are going to create a function clean_text() to remove all the punctuation marks and special characters from the data. We will split the data according to space character and separate each word using split(). maketrans() function is used to construct the transition table i.e specify the list of characters that need to be replaced in the whole string or the characters that need to be deleted from the string. The first parameter specifies the list of characters that need to be replaced, the second parameter specifies the list of characters with which the characters need to be replaced, the third parameter specifies the list of characters that needs to be deleted.It returns the translation table which specifies the conversions that can be used by translate(). string.punctuation is a pre-initialized string used as string constant which will give all the sets of punctuation. To translate the characters in the string translate() is used to make the translations. This function uses the translation mapping specified using the maketrans(). The isalpha() method returns True if all the characters are alphabet letters (a-z). The lower() methods returns the lowercased string from the given string. We can see that after passing data to clean_text() we get the data in the required format without punctuations and special characters. def clean_text(doc): tokens = doc.split() table = str.maketrans('', '', string.punctuation) tokens = [w.translate(table) for w in tokens] tokens = [word for word in tokens if word.isalpha()] tokens = [word.lower() for word in tokens] return tokens tokens = clean_text(data) print(tokens[:50]) ['] The total number of words are 898199. len(tokens) 898199 The total number of unique words are 27956. len(set(tokens)) 27956 As discussed before we are going to use a set of previous words to predict the next word in the sentence. To be precise we are going to use a set of 50 words to predict the 51st word. Hence we are going to divide our data in chunks of 51 words and at the last we will separate the last word from every line. We are going to limit our dataset to 200000 words. length = 50 + 1 lines = [] for i in range(length, len(tokens)): seq = tokens[i-length:i] line = ' '.join(seq) lines.append(line) if i > 200000: break print(len(lines)) 199951 Now we will see the first line consisting of 51 words. lines[0] ' The 51st word in this line is 'self' which will the output word used for prediction. tokens[50] 'self' This is the second line consisting of 51 words. As you can see we have hopped by one word. The 51st word in this line is 'thy' which will the output word used for prediction. lines thy' Build LSTM Model and Prepare X and y Here we have imported all the necessary libraries used to pre-process the data and create the layers of the neural network. We are going to create a unique numerical token for each unique word in the dataset. fit_on_texts() updates internal vocabulary based on a list of texts. texts_to_sequences() transforms each text in texts to a sequence of integers. tokenizer = Tokenizer() tokenizer.fit_on_texts(lines) sequences = tokenizer.texts_to_sequences(lines) sequences containes a list of integer values created by tokenizer. Each line in sequences has 51 words. Now we will split each line such that the first 50 words are in X and the last word is in y. sequences = np.array(sequences) X, y = sequences[:, :-1], sequences[:,-1] X[0] array([ 47, 1408, 1264, 37, 451, 1406, 9, 2766, 1158, 1213, 171, 132, 269, 20, 24, 1, 4782, 87, 30, 98, 4781, 18, 715, 1263, 171, 211, 18, 829, 20, 27, 3807, 4, 214, 121, 1212, 153, 13004, 31, 2765, 1847, 16, 13003, 13002, 754, 7, 3806, 99, 2430, 466, 31]) vocab_size contains all the uniques words in the dataset. tokenizer.word_index gives the mapping of each unique word to its numerical equivalent. Hence len() of tokenizer.word_index gives the vocab_size. vocab_size = len(tokenizer.word_index) + 1 to_categorical() converts a class vector (integers) to binary class matrix. num_classes is the total number of classes which is vocab_size. y = to_categorical(y, num_classes=vocab_size) The length of each sequence in X is 50. seq_length = X.shape[1] seq_length 50 LSTM Model A Sequential model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor. Embedding layer: The Embedding layer is initialized with random weights and will learn an embedding for all of the words in the training dataset. It requires 3 arguments: input_dim: This is the size of the vocabulary in the text data which is vocab_sizein this case. output_dim: This is the size of the vector space in which words will be embedded. It defines the size of the output vectors from this layer for each word. input_length: Length of input sequences which is seq_length. LSTM layer: This is the main layer of the model. It learns long-term dependencies between time steps in time series and sequence data. return_sequence when set to True returns the full sequence as the output. Dense layer: Dense layer is the regular deeply connected neural network layer. It is the most common and frequently used layer. The rectified linear activation function or relu for short is a piecewise linear function that will output the input directly if it is positive, otherwise, it will output zero. The last layer is also a dense layer with 13009 neurons because we have to predict the probabilties of 13009 words. The activation function used is softmax. Softmax converts a real vector to a vector of categorical probabilities. The elements of the output vector are in range (0, 1) and sum to 1. , 50, 50) 650450 _________________________________________________________________ lstm (LSTM) (None, 50, 100) 60400 _________________________________________________________________ lstm_1 (LSTM) (None, 100) 80400 _________________________________________________________________ dense (Dense) (None, 100) 10100 _________________________________________________________________ dense_1 (Dense) (None, 13009) 1313909 ================================================================= Total params: 2,115,259 Trainable params: 2,115,259 Non-trainable params: 0 _________________________________________________________________ model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) After compiling the model we will now train the model using model.fit() on the training dataset. We will use 100 epochs to train the model. An epoch is an iteration over the entire x and y data provided. batch_size is the number of samples per gradient update i.e. the weights will be updates after 256 training examples. model.fit(X, y, batch_size = 256, epochs = 100) Epoch 95/100 199951/199951 [==============================] - 21s 103us/sample - loss: 2.4903 - accuracy: 0.4476 Epoch 96/100 199951/199951 [==============================] - 21s 104us/sample - loss: 2.4770 - accuracy: 0.4497 Epoch 97/100 199951/199951 [==============================] - 21s 106us/sample - loss: 2.4643 - accuracy: 0.4522 Epoch 98/100 199951/199951 [==============================] - 21s 105us/sample - loss: 2.4519 - accuracy: 0.4530 Epoch 99/100 199951/199951 [==============================] - 21s 105us/sample - loss: 2.4341 - accuracy: 0.4562 Epoch 100/100 199951/199951 [==============================] - 21s 105us/sample - loss: 2.4204 - accuracy: 0.4603 We are now going to generate words using the model. For this we need a set of 50 words to predict the 51st word. So we are taking a random line. seed_text=lines[12343] seed_text ' generate_text_seq() generates n_words number of words after the given seed_text. We are going to pre-process the seed_text before predicting. We are going to encode the seed_text using the same encoding used for encoding the training data. Then we are going to convert the seed_textto 50 words by using pad_sequences(). Now we will predict using model.predict_classes(). After that we will search the word in tokenizer using the index in y_predict. Finally we will append the predicted word to seed_text and text and repeat the process. def generate_text_seq(model, tokenizer, text_seq_length, seed_text, n_words): text = [] for _ in range(n_words): encoded = tokenizer.texts_to_sequences([seed_text])[0] encoded = pad_sequences([encoded], maxlen = text_seq_length, truncating='pre') y_predict = model.predict_classes(encoded) predicted_word = '' for word, index in tokenizer.word_index.items(): if index == y_predict: predicted_word = word break seed_text = seed_text + ' ' + predicted_word text.append(predicted_word) return ' '.join(text) We can see that the next 100 words are predicted by the model for the seed_text. generate_text_seq(model, tokenizer, seq_length, seed_text, 100) 'preposterously be stained to leave for them when thou art covetous and we pour their natural fortune grace the other fool as a monkey i cannot weep and ends you clown nay ay my lord ham aside the queen quoth thou into sport angelo to th capitol brutus patience peace night i am returnd to th field o th gout and a man that murdred of the greatest rout of her particular occasions that outruns the world side corioli did my performance gone cut the offender and take thee to you that dare not go and show me to be' We have got a accuracy of 46%. To increase the accuracy we can increase the number of epochs or we can consider the entire data for training. For this model we have only considered 1/4th of the data for training.
https://kgptalkie.com/text-generation-using-tensorflow-keras-and-lstm/
CC-MAIN-2021-17
refinedweb
2,005
65.73
Last. In Part 1 of this series, we created an e-commerce site that exposed three types of URLs: We handled these URLs by creating a "ProductsController" class like below:. The ASP.NET MVC framework includes a flexible URL routing system that enables you to define URL mapping rules within your applications. The routing system has two main purposes:.: Or by taking advantage of the new object initializer feature in the VS 2008 C# and VB compilers to set the properties more tersely:. Let's walk through using some custom routing rules in a real world scenario. We'll do this by implementing search functionality for our e-commerce site. We'll start by adding a new SearchController class to our project:: And with that we now have "pretty URL" searching for our site (all that remains is to implement the search algorithm - which I will leave as an exercise to the reader <g>).: If we pass in a URL like /Products/Detail/12 to our application, the above routing rule will be valid. If we pass in /Products/Detail/abc or /Products/Detail/23232323232 it will not match. Earlier in this blog post I said that the URL routing system in the ASP.NET MVC Framework was responsible for two things:: automatically pick up the special Search results route rule we configured earlier in this post, and the "href" attribute they generate automatically reflect this:: would use the URL routing system to return the below raw URL (not wrapped in a <a href=""> element):: Can also be written as:: Instead it just returns this HTML hyperlink: When this hyperlink is clicked by an end-user it will then send back a http request to the server that will invoke the SearchController's Results action method. Any idea when this framework will be available for download? Cheers Harry Thanks Scott. Nice Tutorial indeed. Counting days to lay my hands on the first MVC CTP. BTW, can you explain in short about the Active Record Type support in our MVC. Are we having such type of facility like " Active Record " as found in Ruby on Rails or we are going to have something similar to it for managing databases with ORM support. Thanks You've been kicked (a good thing) - Trackback from DotNetKicks.com Excellent post ScottGu! Love your work. Question : What is this MockHttpContext in the UnitTest class? I mean, i can guess what it is .. but is it new in .NET 3.5 or the MVC framework? -PK, from Melbourne, Australia- Hi Scott, This looks very powerful - I'm looking forward to playing with the CTP. A couple of questions: 1) Is it possible to use routes with a file extension if I don't want to enable wildcard script mappings? For example, /Products/List/Beverages.rails or /Products/Detail/34.rails ? 2) Hypothetically, do you think it would be possible to customise the route creation process so that route data could be gathered from attributes on actions? For example: public class SearchController : Controller { [ControllerAction, Route("Search/[query]")] public void Results(string query) { ... } } Jeremy All good stuff! Any news on when we can get our hands on the CTP? Great Going! Are there any chances of Converting one of the existing Starter kits in a modified MVC way,with all new tools and Controls. This will be of great advantage. Just my few cents Pingback from Charles Mark Carroll Blog » Blog Archive » Scott Guthrie reveals more ASP.net MVC Framework Goodies More in-depth details about the ASP.NET MVC pipeline (pre-CTP1) are available at: blog.codeville.net/.../aspnet-mvc-pipeline-lifecycle Using lambdas with ActionLink - slick. And a great example of less-obvious uses for expressions. Pingback from AJAX coding school » Blog Archive » AJAX Examples [2007-12-03 12:01:46]. Will there be a provider for loading routes from XML or a database? Or should a controller be able to do a lookup to find the exact controller? Scott, It appears that ASP.NET can register http handlers at runtime now and that there is no need for an extension in a URL. Is this true for ASP.NET 3.5 or just the MVC framework? i.e. is there a way to have URLs with no extensions in a plain old ASP.NET 3.5 application without IIS7? Thanks, Dave Can't wait to try MVC Scott!... Anyway, gives us a CTP soon!! :) Zend fan ? :) Pingback from In depth: The ASP.NET MVC Pipeline « codeville Make life easier use URLRewriting @ I looked at it in the beginning this year. Does all the above by changing tags in web.config Had a problem with postback which they might have fixed. Most of your needs will be covered. Hi Scott, As for the validation rules, you only mentioned that it will match or not match based on the regular expression - how would a developer validate that? Is there an IsValid() method call or would the non-match throw an exception to be handled? Is it possible to use IronRuby as the coding language to target the MVC framework?? Looks great Scott, any more news on when it will be released? Pingback from ASP.NET MVC Framework (Part 2): URL Routing » article » Thats The New Thing! cooooooooooool! Thanks for spending all these long hours on creating such great tools...2:44 AM, I don't see how you do it. Todd Isn't there going to be a url rewriting module for web forms soon?. Great article! Two quick questions: Is there a version of this available yet? And can I use it with VS2005? Thank you!! Great to see the url routing offering all this functionality. I really like the validation and testing and can't wait to give it try. Looks great! Any guess as to when the preview release will be available for download? Could you do something wrt error handling? What would be best practice if the controller encounters an error? Or the view? Or the route doesn't exist (or fails it's validation)? Its all cool, Scott is the prodigy no doubt about it? But I am not seeing any Innovations,we are just tryin 2 accomplish it in a more smart manner, though the basic achievment is the same, app still sits on IIS and people browse it! We need some cut thru Innovation.... Can't wait for the next article !!!! Please make it soon. Just a request Why dont you and Nikhil Kothari Synchronize your MVC posts ?. We are starting a new project here in BBC Worldwide (UK) using MVC. I've managed to partly get the guys excited to use your MVC rather than our own, to be an early adopter. If it matters to you, I'm sure an encouraging/support email from you will of great help to get the team excited here. Paymon Paymon. k hamooshi [ at ] b b c.co.uk Will there be any way to use an XML document to create the routing rules outside of the Global.asax code? Our sites are written for may different customers and each customer sometimes requires a different set of routing rules. Our current system is a MVC controller that we wrote internally but we would like to move towards the Microsoft MVC controller. We currently use an XML document for each customer to define the routing rules between actions and views. This is an important feature to us to be able to support our customers. I am excited for the release of this code. Mike Thank you Scott, A couple of questions: Can this handle the Url's scheme for Constructing Outgoing URLs (https or http) so that we can force a certain protocol with an absolute url in some cases. In the case when an application is mapped to multiple domains and aliases, can the route take the domain in consideration. application.com/ maps to home controller, index action application.com/slug1 maps to home controller, index action, slug=slug1 client1.com/ maps to home controller, index action, slug=slug1 @jeffreyabecker I'm not sure that driving authorization from routes is a good idea because there may be more than one route to a specific controller and controller action. It wouldn't be unheard of. It seems to me granting access at the controller and/or action level makes the most sense. This looks really cool!? Scott i have been following asp.net MVC from the start...its great !!! waitin Is this going to require IIS 7? Or will it work in IIS 6 but you have to map all requests to the ASP.NET engine? This is great stuff ... BUT I'd like to see some definitive answers to the following questions: 1. What is the difference between MS MVC and Castle's Monorail? 2. Why would I choose one over the other? Others have asked the same question though I've been unable to track down any real substantive answers. thanks - wayde I am very excited to start using this. The only thing that smells funny is using the lambda expression to package the information needed to determine the routing rule in the generic Html.ActionLink<T>() method. I understand it provides an easy way to do type checking and also intellisense, but it seems strange to see what looks like a method call on an instantiated object be passed as a parameter package. I personally prefer the non-generic Html.ActionLink() method of passing the information because it's clear what is going on. I understand the cool factor of the generic call, but I just feel it might be too much. Am I alone in this thinking? Excellent Post Scott, love the unit testing example to wrap things up. I can see that the routes table could grow rather large for complex applications. Is there a certain level where you could envision the routes table becoming so large that performance would start to be effected? Have you thought about being able to define a regular expression for url matching and using backreferences or named captures as the tokenized url? I think this would allow for much more flexibility while keeping the list of routing rules down to a minimum. Great post (as always), Why do we need to programmatically write the routes and not use a configuration based one, seems very trivial ? In general I prefer to use as minimum code as possible and based everything on configuration and in such way be extendible and bring fix to productions site in an easier way.. Rad. Can't wait to play around with the bits. Thanks Scott, Jon Wow, looking very very good.. I was really happy when I saw the lambda expressions as an alternative.. I was wondering though, could you use lambda expressions in the routing table to define the target method, or would this be damaging to the unit testing and interchangability of the system? If it could be done as an alternative it would be nice for the sake of absolete refactoring.. even if it's common to change the method names (given that the actual urls could be changed without the method names needing to change). Pingback from rascunho » Blog Archive » links for 2007-12-03 Thanks for helpful information. Can you please write someday about How Data Access will work with MVC framework. I read some somewhere that you are going to support all major Data Access Application Framework in MVC. But what would be the best choice? Can you provide us help for Exception Handling in MVC. Yash Patel Can you do an example of an non-web applicaton using the exact same controller model ? for instance, maybe a windows forms app or an o/s service ? Looking to centralize all my logic and swap in/out views as necessary. Pingback from Wöchentliche Rundablage: ASP.NET MVC, Visual Studio 2008, .NET 3.5, Silverlight, SubSonic, WPF… | Code-Inside Blog I can't wait for this to be released! This is all fine and good but the ability to do this has been around for ages. This is just an adoption of the MVP pattern endoursed by Fowler. We have been using MVC / MVP in all our .net app's for ages we just code it up - if you look at the Gang Of Four patterns they give an excellent implementation of MVC with out all the behind the scenes nonesence. I just think its kind of funny when people get all worked up over something thats been around for ages when they think it the latest and greatest... I'm not trying to be a party crasher as patterns are a wonderful thing and add great structure to any project - but I'd recommend implmenting yourself so you are not suck into the MS way of doing things. Hallo Scott, Thanks for the informative post - you really should encode your images as PNG instead as of JPG - the compression artifacts and jaggies are distracting Great post, Scott! I'm eagerly awaiting the preview. I'm assuming that you can route several different URLs to the same controller/action. This would be useful if you updated your URL scheme, but didn't want to break the old URLs. How would the reverse lookup work in this case though? If I had two URLs map to one action, what would Url.Action() return? Couple questions: 1. As others have mentioned, please implement a parallel web.config-based mechanism. With Forms Authentication you locked us into web.config, now you're locking us into code-based. Please find the (correct) middle ground and let us choose. 2. Are application-relative paths (~/[controller]) valid? You wouldn't require all sites using MVC to be the root site for the web server, right? Chris Will routes be able to redirects based on Sub-Domains? For example, acme.noname.com with route to /company/[name] @haacked Agreed. That's actually what I was trying to get at via the routes. So I guess my question is better asked as: At what point in the HttpApplication cycle is the route resolved into a controller, action & params and is that result exposed in any way to HttpApplication? My current permissions schema grants at the Action / Business Entity level and we're having to go through all sorts of machinations to enforce that. Assuming that Business Entity => Controller this would make things very simple. Maybe it’s silly question, but how do you resolve this URL: /Products/List/Grains/Cereals -Albert Brandon Bloom wrote: "I noticed that in Django, you have to repeat yourself kind of often when you have a deep nested hierarchy of pages." Brandon -- I'm pretty sure you're looking for Django's "include()" function, which lets you nest URLconfs inside one another to solve that problem. I don't know whether something like this is available for this ASP.NET framework, but it's the kind of thing that's pretty easy to add. you know i'll waiting your next Part of MVC Framework... how to get the number of days by subtracting a date from a date? i m working on a project where i have to give check in date and check out date and after subtraction it will display the number of days. please let me know how to do this? I am new in asp.net with vb.net platform.so kindly help me out. plz give my problem highest priority. your reply is eargly awaited Hi Harry, >>>>>> Any idea when this framework will be available for download? We are planning to release it the end of this week. Hi Softmind, >>>>>>>. Hi PK, >>>>>>). Hi Jeremy, >>>>>>. Hi Neil, >>>>>> All good stuff! Any news on when we can get our hands on the CTP? Later this week! :-) Hi Fredrik, >>>>>> Are there any chances of Converting one of the existing Starter kits in a modified MVC way,with all new tools and Controls. Good question. We are definitely planning on shipping starter samples - I'm not sure if we've looked yet at actually converting an existing app to use it. That is a good idea. Hi Mike, >>>>>>>>. You could do a couple of approaches. One would be to register your specific URL routes first (for example: for login, home, admin pages), and then basically do a * route rule to handle all other URLs. You could map these to a CMS content controller that was then responsible for fetching the appropriate content from the database/filesystem and displaying it. We'll have richer support in our next preview for handling * mapping scenairos (including the ability to-do * mapping of sub-URL namespaces). But I think the support for what you are after might even work with the first preview release. Hi Jan, >>>>>>>>... It is a good question - and one we looked at a little. The challange is that with route rules you often want control over the order of "who wins when there is a conflict". By embedding each routing rule on specific controllers, it becomes harder to specify ordering. That is why we went with the routetable approach (where there is ordering control). Hi Deepak, >>>>>> Make life easier use URLRewriting @ For more on URL rewriting and ASP.NET I also recommend this previous post I've done: weblogs.asp.net/.../tip-trick-url-rewriting-with-asp-net.aspx Hi Chris, >>>>>> As for the validation rules, you only mentioned that it will match or not match based on the regular expression - how would a developer validate that? Is there an IsValid() method call or would the non-match throw an exception to be handled? There isn't currently a custom IsValid method with the first preview release. This is a scenario we are looking to potentially enable with our extensible Route types - where you could sub-class a Route object and potentially have more control for scenarios like this. Hi Ahmad, >>>>> Is it possible to use IronRuby as the coding language to target the MVC framework?? Yes - we'll support both IronRuby and IronPython with the ASP.NET MVC Framework. Hi Matthew, >>>>>>>. We don't currently have a pre-built file format for reading in mapping rules. But it should be relatively easy to build (you could even use LINQ to XML to read in and construct the rules). Hi eponymous, >>>>>>> Two quick questions: Is there a version of this available yet? And can I use it with VS2005? We'll have the first public preview of it available later this week. It currently requires VS 2008 - since it uses some new features in .NET 3.5. Hi Morrijr, >>>>>>>> Could you do something wrt error handling? What would be best practice if the controller encounters an error? Or the view? Or the route doesn't exist (or fails it's validation)? I'll probably do a blog post on this in the future. There are several ways that you can handle errors. The controller has an OnError method you can override to handle controller-specific errors, and you can use the standard Application_OnError event to handle global errors. Within your controller action you can then handle unhandled view exceptions using a local try/catch.
http://weblogs.asp.net/scottgu/archive/2007/12/03/asp-net-mvc-framework-part-2-url-routing.aspx
crawl-001
refinedweb
3,208
74.08
Django: Tear down and re-sync the database Django includes the useful management command syncdb for creating the database tables and columns used by your application. If you add new tables (model classes) then re-running syncdb will add them for you. Unfortunately if you modify columns of existing tables, or add new columns, then syncdb isn't man enough for the job. For modifying the schema of production systems migrations are the way to go. I played a bit with South for Django, which is pretty straightforward. For a system still in development, and changing rapidly, migrations are overkill. We have a script for populating the database with test data, which we update as the schema evolves. (In parallel with this we have a script that imports the original data from the legacy application we are replacing - again updating the script as our app is capable of handling more of the original schema.) For development what we really want to do is to tear down our development database and re-run syncdb. Running syncdb requires manual input, to create a superuser, so preferably we want to disable this so that the whole process can be automated. I found various recipes online to do this, but mostly using an obsolete technique to disable superuser creation. In the end I used a combination of this recipe to programatically clear the databases (using the sql generated by sqlclear) and this recipe to disable super user creation. Note The code also skips clearing the authentication table as we are using Django authentication unmodified. Comment out the line that does this if you aren't using Django authentication or want to clear it anyway. #!/usr/bin/env python import os import sys import StringIO import settings from django.core.management import setup_environ, call_command setup_environ(settings) from django.db import connection from django.db.models import get_apps, signals app_labels = [app.__name__.split('.')[-2] for app in get_apps()] # Skip clearing the users table app_labels.remove('auth') sys.stdout = buffer = StringIO.StringIO() call_command('sqlclear', *app_labels) sys.stdout = sys.__stdout__ queries = buffer.getvalue().split(';')[1:-2] cursor = connection.cursor() for query in queries: cursor.execute(query.strip()) from django.db.models import signals from django.contrib.auth.management import create_superuser from django.contrib.auth import models as auth_app # Prevent interactive question about wanting a superuser created. signals.post_syncdb.disconnect( create_superuser, sender=auth_app, dispatch_uid = "django.contrib.auth.management.create_superuser") call_command('syncdb') It wasn't all plain sailing. We're using MySQL (God help us) and our development machines are all running Mac OS X. On Mac OS X MySQL identifiers, including table names, are case insensitive. Whilst I would object strongly to a case sensitive programming language this actually makes working at the sql console slightly less annoying so it isn't a problem in itself. We define our data model using standard Django model classes: from django.db import models class NewTableName(models.Model): NewColumnName = models.CharField(max_length=255, db_column="OriginalSpaltennamen") class Meta: db_table = 'UrsprunglichenTabellennamen' The Meta.db_table specifies the table name that will actually be used in the database. We use the original table and column names where possible as the end users will have to modify some existing tools to work with the new system and this minimizes the changes. As you can see both the original table and new table names are mixed case. For some reason, which I never got to the bottom of, where the model classes have foreign key relationships syncdb will create these tables with all lowercase names. This could be Django, MySQL or the Python connector to MySQL (or any combination of these) and I never worked out why. Unfortunately sqlclear will only generate sql to drop tables where the casing specified in the model exactly matches the casing in the database. I worked round it by changing all our Meta.db_table entries to be all lowercase. Not what you would call ideal but acceptable. Now everytime we update our database schema we can simply run this script. It drops all existing tables and then re-creates them with all the changes. Note Carl Meyer suggests using call_command('syncdb', interactive=False) instead of the signals.post_syncdb.disconnect code. It's certainly shorter but I haven't tried it yet. In the comments Stavros Korokithakis points out that the reset admin command will reset individual apps and regenerate them. If you have several apps in a project this script is still simpler, but if you only need to reset one then you might as well just use ./manage.py reset <appname>. It takes the --no-input switch if you want to supress the user prompts. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2009-12-27 00:06:13 | | Categories: Python, Work, Tools Tags: django, sql, mysql, databases More Changes to Mock: Mocking Magic Methods I recently released Mock 0.4.0, my test support library for mocking objects and monkeypatching. It's been gratifying and surprising to get so many emails from people using it. - How To Test Django Template Tags - A presentation on Unit Testing with Mock - Mocking with Django and Google AppEngine I originally wrote Mock to simplify some of the testing patterns we use at Resolver Systems, and we use a forked version [1] there. Mock has greatly improved the readability of our tests whilst reducing the number of lines of code needed. Some improvements made at Resolver Systems (like assert_called_with and the single argument form of patch) have fed back into the public version. The obvious lack in Mock as it stands at 0.4.0 is its inability to mock magic methods. This means that you can't use it to mock out any of the built-in container classes, or classes that implement protocol methods like __getitem__ and __setitem__. The main reason that it didn't support them was the lack of a clear and obvious way to do it. At Resolver Systems we've bolted on support for a few of the magic methods as we've needed them; unfortunately in an ad-hoc and inconsistent manner. My requirements for protocol support in Mock (which I think I've now met) were: - A clean, simple and consistent API - Being able to meet most use cases without having to meet all of them - Not unconditionally adding a large number of new attributes to mock objects [2] - Mocks shouldn't support all the protocol methods by default as this can break duck-typing - It mustn't place any additional burden on developers not using them The solution I've now come up with is implemented in the SVN repository, and will become Mock 0.5.0. So far it is only capable of mocking containers and I'd like feedback as to whether this is going to meet people's use cases. If you are interested in this, please try the latest version and let me know what you think: Documentation for the new features is not done, but it is all tested so you can check 'mocktest.py' and 'patchtest.py' for examples of how they work. The implementation uses a new class factory called MakeMock. This takes a list of strings specifying the magic methods (without the double underscores for brevity!) you want your mock objects to have - it returns a subclass of Mock that only has the magic methods you asked for. For the container methods, the Mock class takes a keyword argument items that can either be a dictionary or a sequence. This is stored as the _items attribute (that only exists on mocks with container methods), defaulting to an empty dictionary, and can be any mapping or sequence object. The container methods delegate to this, and all calls are recorded normally in method_calls. >>> MagicMock = MakeMock('getitem setitem'.split()) >>> mock = MagicMock(items={'a': 1, 'b': 2}) >>> mock['a'] 1 >>> mock.method_calls [('__getitem__', ('a',), {})] >>> mock['c'] = 10 >>> mock._items {'a': 1, 'c': 10, 'b': 2} There is an additional bit of magic. When you instantiate a mock object normally (using Mock(...)), you can use the 'magics' or 'spec' keyword arguments to actually get back an instance with magic methods support. The spec keyword argument takes a class that you want your Mock to imitate, and accessing methods not on the actual class will raise an AttributeError. When you instantiate a mock object with a spec keyword argument, the constructor will check if the spec class has any supported magic methods; if it does you will actually get an instance of a mock that has these magic methods. The magics keyword argument is new, and lets you specify which magic methods you want: >>> mock = Mock(magics='getitem contains') >>> 'hi' in mock False >>> mock['hi'] = 'world' Traceback (most recent call last): ... TypeError: 'MagicMock' object does not support item assignment >>> mock._items['hi'] = 'world' >>> 'hi' in mock True >>> type(mock) <class 'mock.MagicMock'> >>> mock.method_calls [('__contains__', ('hi',), {}), ('__contains__', ('hi',), {})] Note that the magics keyword takes a string and does the split for you. As I implement more magic methods I'll provide shortcuts for obtaining instances / classes that have all the container methods, or all the numeric methods, or just everything. The patch decorator also now take spec and magics keyword argument, but that's not as useful as it sounds. You will usually be using patch to mock out a class, so you'll still need to set the return value to be a mock instance with the methods you want. For comparison methods I'll allow you to provide a 'value' object that all comparisons delegate to. This can also be used for hashing, in-place, right hand side and unary operations. That doesn't leave much left to cover (descriptors and context management protocol methods - but I'm not sure how much demand there will be for mocking these.) Protocol methods supported currently are: - __getitem__ - __setitem__ - __delitem__ - __iter__ - __len__ - __contains__ - __nonzero__ The list of changes that are currently in Mock 0.5.0: Mock has a new 'magics' keyword arguments - a list (or string separated by whitespace) of magic methods that the Mock instance should provide (only container methods available so far). Mock has an 'items' keyword argument for mocks implementing container methods. The methods keyword argument to Mock has been removed and merged with spec. The spec argument can now be a list of methods or an object to take the spec from. patch and patch_object now take magics and spec keyword arguments - TODO: verify - Nested patches may now be applied in a different order (created mocks passed in the opposite order). This is actually a bugfix. MakeMock is a Mock class factory. It takes a list of magic methods and creates a MagicMock class (Mock subclass) with the magic methods you specified. Currently only container methods are available. Instantiating a Mock with magics / spec will actually create a MagicMock with magic methods from the method / spec list. There are still a few things left to do (apart from implementing support for all the other magic methods) and some open questions: - Should MakeMock accept a string argument (and do the split for you)? - How should reset affect '_items'? It should probably take a copy of the items at instantiation and restore it when reset. - Should a failed indexing attempt still be added to 'method_calls'? (Currently not - it just raises the usual exception.) - Should attributes (children) on mocks created from MakeMock be plain 'Mock' or from the same class as their parent? (currently they are plain mocks) - Parent method calls if magic methods called on a child? (not currently possible as all children will be mocks rather than magic mocks) The first two of these I will probably decide one way or the other before a 0.5.0 release. The others I may just leave for a while and see how it gets used. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2008-11-05 15:13:20 | | Categories: Python, Projects, Tools Tags: testing, mocks, protocols, magic methods_3<< Here is Jonathan's rendering of the same diagram: Remote Pairing with Copilot and Skype Today I was due in at work, but engineering works between Northampton and London meant that there were no trains (which I didn't discover until I got to the station). I would have quite happily stayed at home and spiked, but Glenn was actually in the office and we couldn't think of enough for two of us to spike. At Resolver we practise Pair Programming, and all production code has to be paired on, so we decided to experiment with remote pairing. We used Skype for the audio, which was straightforward. We considered using a collaborative editor, like Gobby. We decided in the end to try screen sharing as it would be more flexible. Additionally, I didn't have a full build environment and copy of the subversion tree we were working on (and a collaborative editor would probably require me to have the files being edited). I discovered this Jon Udell blog entry on screen sharing tools, but in the end we decided to give Joel Spolsky's Copilot a try. It is based on VNC and is really easy to use. A day pass costs $5 [1] (I paid with Paypal) and then both 'sides' download a 736kb client and run it. That's it, no messing around with IP addresses or configuring routers and firewalls, just run the client and you are sharing a screen (the session id is encoded into the client you download - very clever). Both Resolver and I have a pretty good internet connection, and the screen sharing was very good. There was a bit of a lag, but less than the VNC clients I've used - even when the connection is only across an intranet [2]. It was a great experience. The audio connection was seamless and it really felt like 'pairing'. We worked together on the problem, and were both able to 'drive' (control the keyboard). We even completed the user story! As there is a Mac client (no Linux client I'm afraid) I could pair program Resolver without having to use Windows! Obviously a non-propietary solution would be even better than copilot, but I can't see one being as easy to use. I really enjoy working at Resolver, but I can't see myself doing the two hour commute (each way) indefinitely. If I could work from home this would be my dream job. On a totally different note, I've received my Moo Cards (and they're great) and an Eee PC with 8GB flash drive and 1GB memory [3]. I've also been watching The Muppet Show and The Cosby Show, both of which were childhood favourites and still hilarious - as you would know if you were following me on Twitter. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-12-27 22:55:07 | | Categories: General Programming, Work, Tools_8<<_9<< Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-12-20 15:11:03 | | Categories: Python, IronPython, Tools, Hacking Archives This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License. Counter...
http://www.voidspace.org.uk/python/weblog/arch_Tools.shtml
CC-MAIN-2016-44
refinedweb
2,548
63.29
If you’re serious about web development, then you’ll need to learn about JavaScript at some point. Year after year, numerous surveys have shown that JavaScript is one of the most popular programming languages in the world, with a large and growing community of developers. Just like Python, modern JavaScript can be used almost anywhere, including the front end, back end, desktop, mobile, and the Internet of Things (IoT). Sometimes it might not be an obvious choice between Python vs JavaScript. If you’ve never used JavaScript before or have felt overwhelmed by the quick pace of its evolution in recent years, then this article will set you on the right path. You should already know the basics of Python to benefit fully from the comparisons made between the two languages. In this article, you’ll learn how to: - Compare Python vs JavaScript - Choose the right language for the job - Write a shell script in JavaScript - Generate dynamic content on a web page - Take advantage of the JavaScript ecosystem - Avoid common pitfalls in JavaScript Free Bonus: 5 Thoughts On Python Mastery, a free course for Python developers that shows you the roadmap and the mindset you’ll need to take your Python skills to the next level. JavaScript at a Glance If you’re already familiar with the origins of JavaScript or just want to see the code in action, then feel free to jump ahead to the next section. Otherwise, prepare for a brief history lesson that will take you through the evolution of JavaScript. It’s Not Java! Many people, notably some IT recruiters, believe that JavaScript and Java are the same language. It’s hard to blame them, though, because inventing such a familiar-sounding name was a marketing trick. JavaScript was originally called Mocha before it was renamed to LiveScript and finally rebranded as JavaScript shortly before its release. At the time, Java was a promising web technology, but it was too difficult for nontechnical webmasters. JavaScript was intended as a somewhat similar but beginner-friendly language to supplement Java applets in web browsers. Fun Fact: Both Java and JavaScript were released in 1995. Python was already five years old. To add to the confusion, Microsoft developed its own version of the language, which it called JScript due to a lack of licensing rights, for use with Internet Explorer 3.0. Today, people often refer to JavaScript as JS. While Java and JavaScript share a few similarities in their C-like syntax as well as in their standard libraries, they’re used for different purposes. Java diverged from the client side into a more general-purpose language. JavaScript, despite its simplicity, was sufficient for validating HTML forms and adding little animations. It’s ECMAScript JavaScript was developed in the early days of the Web by a relatively small company known as Netscape. To win the market against Microsoft and mitigate the differences across web browsers, Netscape needed to standardize their language. After being turned down by the international World Wide Web Consortium (W3C), they asked a European standardization body called ECMA (today Ecma International) for help. ECMA defined a formal specification for the language called ECMAScript because the name JavaScript had been trademarked by Sun Microsystems. JavaScript became one of the implementations of the specification that it originally inspired. Note: In other words, JavaScript conforms to the ECMAScript specification. Another notable member of the ECMAScript family is ActionScript, which is used on the Flash platform. While individual implementations of the specification complied with ECMAScript to some extent, they also shipped with additional proprietary APIs. This led to web pages not displaying correctly across different browsers and the advent of libraries such as jQuery. Are There Other Scripts? To this day, JavaScript remains the only programming language natively supported by web browsers. It’s the lingua franca of the Web. Some people love it, while others don’t. There have been—and continue to be—many attempts to replace or supplant JavaScript with other technologies, including: - Rich Internet Applications: Flash, Silverlight, JavaFX - Transpilers: Haxe, Google Web Toolkit, pyjs - JavaScript dialects: CoffeeScript, TypeScript These attempts were driven not only by personal preference but also by web browsers’ limitations before HTML5 came onto the scene. In those days, you couldn’t use JavaScript for computationally intensive tasks such as drawing vector graphics or processing audio. Rich Internet Applications (RIA), on the other hand, offered an immersive desktop-like experience in the browser through plugins. They were great for games and processing media. Unfortunately, most of them were closed source. Some had security vulnerabilities or performance issues on certain platforms. To top it off, they all severely limited the ability of web search engines to index pages built with these plugins. Around the same time came transpilers, which allowed for an automated translation of other languages into JavaScript. This made the entry barrier to front-end development much lower because suddenly back-end engineers could leverage their skills in a new field. However, the downsides were slower development time, limited support for web standards, and cumbersome debugging of the transpiled JavaScript code. To link it back to the original code, you’d need a source map. Note: While a compiler translates human-readable code written in a high-level programming language straight into machine code, a transpiler translates one high-level language into another. That’s why transpilers are also known as source-to-source compilers. They’re not the same as cross compilers, though, which produce machine code for foreign hardware platforms. To write Python code for the browser, you can use one of the available transpilers, such as Transcrypt or pyjs. The latter is a port of Google Web Toolkit (GWT), which was a wildly popular Java-to-JavaScript transpiler. Another option is to use a tool like Brython, which runs a streamlined version of the Python interpreter in pure JavaScript. However, the benefits might be offset by poor performance and lack of compatibility. Transpiling allowed a ton of new languages to emerge with the intent of replacing JavaScript and addressing its shortcomings. Some of these languages were closely related dialects of JavaScript. Perhaps the first was CoffeeScript, which was created about a decade ago. One of the latest was Google’s Dart, which was the fastest-growing language in 2019 according to GitHub. Many more languages followed, but most of them are now obsolete due to the recent advances in JavaScript. One glaring exception is Microsoft’s TypeScript, which has gained much popularity in recent years. It’s a fully compatible superset of JavaScript that adds optional static type checking. If that sounds familiar to you, that’s because Python’s type hinting was inspired by TypeScript. While modern JavaScript is mature and actively developed, transpiling is still a common approach to ensure backward compatibility with older browsers. Even if you’re not using TypeScript, which seems to be the language of choice for many new projects, you’re still going to need to transpile your shiny new JavaScript into an older version of the language. Otherwise, you run the risk of getting a runtime error. Some transpilers also synthesize cutting-edge web APIs, which might be unavailable on certain browsers, with a so-called polyfill. Today, JavaScript can be thought of as the assembly language of the Web. Many professional front-end engineers tend not to write it by hand anymore. In such a case, it’s generated from scratch through transpiling. However, even handwritten code often gets processed in some way. For example, minification removes whitespace and renames variables to reduce the amount of data to transfer and to obfuscate the code so that it’s harder to reverse engineer. This is analogous to compiling the source code of a high-level programming language into native machine code. In addition to this, it’s worthwhile to mention that contemporary browsers support the WebAssembly standard, which is a fairly new technology. It defines a binary format for code that can run with almost-native performance in the browser. It’s fast, portable, secure, and allows for cross compilation of code written in languages like C++ or Rust. With it, for example, you could take the decades-old code of your favorite video game and run it in the browser. At the moment, WebAssembly helps you optimize the performance of computationally critical parts of your code, but it comes with a price tag. To begin with, you need to know one of the currently supported programming languages. You have to become familiar with low-level concepts such as memory management as there’s no garbage collector yet. The integration with JavaScript code is difficult and costly. Also, there’s no easy way to call web APIs from it. It seems that, after all these years, JavaScript isn’t going away anytime soon. JavaScript Starter Kit One of the first similarities you’ll notice when comparing Python vs JavaScript is that the entry barriers for both are pretty low, making both languages very attractive to beginners who’d like to learn to code. For JavaScript, the only starting requirement is having a web browser. If you’re reading this, then you’ve already got that covered. This accessibility contributes to the language’s popularity. The Address Bar To get a taste of what it’s like to write JavaScript code, you can stop reading now and type the following text into the address bar before navigating to it: The literal text is javascript:alert('hello world'), but don’t just copy and paste it! That part after the javascript: prefix is a piece of JavaScript code. When confirmed, it should make your browser display a dialog box with the hello world message in it. Each browser renders this dialog slightly differently. For example, Google Chrome displays it like this: Copying and pasting such a snippet into the address bar will fail in most browsers, which filter out the javascript: prefix as a safety measure against injecting malicious code. Some browsers, such as Mozilla Firefox, take it one step further by blocking this kind of code execution entirely. In any case, this isn’t the most convenient way of working with JavaScript because you’re constrained to only one line and limited to a certain number of characters. There’s a better way. Web Developer Tools If you’re viewing this page on a desktop or a laptop computer, then you can take advantage of the web developer tools, which provide comparable experience across competing web browsers. Note: The examples that follow use Google Chrome version 80.0. Keyboard shortcuts may vary for other browsers, but the interface should be largely the same. To toggle these tools, refer to your browser’s documentation or try one of these common keyboard shortcuts: F12 Ctrl+Shift+I Cmd+Option+I This feature may be disabled by default if you’re using Apple Safari or Microsoft Edge, for example. Once the web developer tools are activated, you’ll see a myriad of tabs and toolbars with content similar to this: Collectively, it’s a powerful development environment equipped with a JavaScript debugger, a performance and memory profiler, a network traffic manager, and much, much more. There’s even a remote debugger for physical devices connected over a USB cable! For the moment, however, just focus on the console, which you can access by clicking a tab located at the top. Alternatively, you can quickly bring it to the front by pressing Esc at any time while using the web developer tools. The console is primarily used for inspecting log messages emitted by the current web page, but it can also be a great JavaScript learning aid. Just like with the interactive Python interpreter, you can type JavaScript code directly into the console to have it executed on the fly: It has everything you’d expect from a typical REPL tool and more. In particular, the console comes with syntax highlighting, contextual autocomplete, command history, line editing similar to GNU Readline, and the ability to render interactive elements. Its rendering abilities can be especially useful for introspecting objects and tabular data, jumping to source code from a stack trace, or viewing HTML elements. You can log custom messages to the console using a predefined console object. JavaScript’s console.log() is the equivalent of Python’s print(): console.log('hello world'); This will make the message appear in the console tab in the web developer tools. Apart from that, there are a few more useful methods available in the console object. HTML Document By far the most natural place for the JavaScript code is somewhere near an HTML document, which it typically manipulates. You’ll learn more on that later. You can reference JavaScript from HTML in three different ways: You can have as many of these as you like. The first and second methods embed inline JavaScript directly within an HTML document. While this is convenient, you should try to keep imperative JavaScript separate from declarative HTML to promote readability. It’s more common to find one or more <script> tags referencing external files with JavaScript code. These files can be served by either a local or a remote web server. The <script> tag can appear anywhere in the document as long as it’s nested in either the <head> or the <body> tag: <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Home Page</title> <script src=""></script> <script src="local/assets/app.js"></script> <script> function add(a, b) { return a + b; } </script> </head> <body> <p>Lorem ipsum dolor sit amet (...)</p> <script> console.log(add(2, 3)); </script> </body> </html> What’s important is how web browsers process HTML documents. A document is read top to bottom. Whenever a <script> tag is found, it gets immediately executed even before the page has been fully loaded. If your script tries to find HTML elements that haven’t been rendered yet, then you’ll get an error. To be safe, always put the <script> tags at the bottom of your document body: <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Home Page</title> </head> <body> <p>Lorem ipsum dolor sit amet (...)</p> <script src=""></script> <script src="local/assets/app.js"></script> <script> function add(a, b) { return a + b; } </script> <script> console.log(add(2, 3)); </script> </body> </html> Not only will this protect you against the said error, but it will also improve the overall user experience. By moving those tags down, you’re allowing the user to see the fully rendered page before the JavaScript files start to download. You could also defer the download of external JavaScript files until the page has loaded: <script src="" defer></script> If you want to find out more about mixing JavaScript with HTML, then take a look at a JavaScript Tutorial by W3Schools. Node.js You don’t need a web browser to execute JavaScript code anymore. There’s a tool called Node.js that provides a runtime environment for server-side JavaScript. A runtime environment comprises the JavaScript engine, which is the language interpreter or compiler, as well as an API for interacting with the world. There are several alternative engines that come with different web browsers: Each of these is implemented and maintained by its vendor. For the end user, however, there’s no noticeable difference except for the performance of individual engines. Node.js uses the same V8 engine developed by Google for its Chrome browser. When running JavaScript inside a web browser, you typically want to be able to respond to mouse clicks, dynamically add HTML elements, or maybe get an image from the webcam. But that doesn’t make sense in a Node.js application, which runs outside of the browser. After you’ve installed Node.js for your platform, you can execute JavaScript code just like with the Python interpreter. To start an interactive session, go to your terminal and type node: $ node > 2 + 2 4 This is similar to the web developer console that you saw earlier. However, as soon as you try to refer to something browser related, you’ll get an error: > alert('hello world'); Thrown: ReferenceError: alert is not defined That’s because your runtime environment is missing the other component, which is the browser API. At the same time, Node.js provides a set of APIs that are useful in a back-end application, such as the file system API: > const fs = require('fs'); > fs.existsSync('/path/to/file'); false For safety reasons, you won’t find these APIs in the browser. Imagine allowing some random website to have control over the files on your computer! If the standard library doesn’t satisfy your needs, then you can always install a third-party package with the Node Package Manager ( npm) that comes with the Node.js environment. To browse or search for packages, go to the npm public registry, which is like the Python Package Index (PyPI). Similar to the python command, you can run scripts with Node.js: $ echo "console.log('hello world');" > hello.js $ node hello.js hello world By providing a path to a text file with the JavaScript code inside, you’re instructing Node.js to run that file instead of starting a new interactive session. On Unix-like systems, you can even indicate which program to run the file with using a shebang comment in the very first line of the file: #!/usr/bin/env node console.log('hello world'); The comment has to be a path to the Node.js executable. However, to avoid hard-coding an absolute path, which may differ across installations, it’s best to let the env tool figure out where Node.js is installed on your machine. Then you have to make the file executable before you can run it as if it were a Python script: $ chmod +x hello.js $ ./hello.js hello world The road to building full-blown web applications with Node.js is long and winding, but so is the path to writing Django or Flask applications in Python. Foreign Language Sometimes the runtime environment for JavaScript can be another programming language. This is typical of scripting languages in general. Python, for example, is widely used in plugin development. You’ll find it in the Sublime Text editor, GIMP, and Blender. To give you an example, you can evaluate JavaScript code in a Java program using the scripting API: package org.example; import javax.script.ScriptEngine; import javax.script.ScriptEngineManager; import javax.script.ScriptException; public class App { public static void main(String[] args) throws ScriptException { final ScriptEngineManager manager = new ScriptEngineManager(); final ScriptEngine engine = manager.getEngineByName("javascript"); System.out.println(engine.eval("2 + 2")); } } This is a Java extension, though it might not be available in your particular Java virtual machine. Subsequent Java generations bundle alternative scripting engines, such as Rhino, Nashorn, and GraalVM. Why is this useful? As long as the performance isn’t too bad, you could reuse the code of an existing JavaScript library instead of rewriting it in another language. Perhaps solving a problem, such as math expression evaluation, would be more convenient with JavaScript than your native language. Finally, using a scripting language for behavior customization at runtime, like data filtering or validation, could be the only way to go in a compiled language. JavaScript vs Python In this section, you’ll compare Python vs JavaScript from a Pythonista’s perspective. There will be some new concepts ahead, but you’ll also discover a few similarities between the two languages. Use Cases Python is a general-purpose, multi-paradigm, high-level, cross-platform, interpreted programming language with a rich standard library and an approachable syntax. As such, it’s used across a wide range of disciplines, including computer science education, scripting and automation, prototyping, software testing, web development, programming embedded devices, and scientific computing. Although it’s doable, you probably wouldn’t choose Python as the primary technology for video game or mobile app development. JavaScript, on the other hand, originated solely as a client-side scripting language for making HTML documents a little more interactive. It’s intentionally simple and has a singular focus: adding behavior to user interfaces. This is still true today despite its improved capabilities. With Javascript, you can build not only web applications but also desktop programs and mobile apps. Tailor-made runtime environments let you execute JavaScript on the server or even on IoT devices. Philosophy Python emphasizes code readability and maintainability at the price of its expressiveness. After all, you can’t even format your code too much without breaking it. You also won’t find esoteric operators like you would in C++ or Perl since most of the Python operators are English words. Some people joke that Python is executable pseudocode thanks to its straightforward syntax. As you’ll find out later, JavaScript offers much more flexibility but also more ways to cause trouble. For example, there’s no one right way of creating custom data types in JavaScript. Besides, the language needs to remain backward compatible with older browsers even when new syntax fixes a problem. Versions Up until recently, you would find two largely incompatible versions of Python available for download on its official website. This divide between Python 2.7 and Python 3.x was confusing to beginners and was a major factor in slowing down the adoption of the latest development branch. In January 2020, after years of delaying the deadline, the support for Python 2.7 was finally dropped. However, despite the looming lack of security updates and warnings issued by some government agencies, there are still a lot of projects that haven’t migrated yet: Brendan Eich created JavaScript in 1995, but the ECMAScript we know today was standardized two years later. Since then, there have been only a handful of releases, which looks stagnant compared to the multiple new versions of Python released each year during the same period. Notice the gap between ES3 and ES5, which lasted an entire decade! Due to political conflicts and disagreements in the technical committee, ES4 never made its way to web browsers, but it was used by Macromedia (later Adobe) as a base for ActionScript. The first major overhaul to JavaScript came in 2015 with the introduction of ES6, also known as ES2015 or ECMAScript Harmony. It brought a lot of new syntactical constructs, which made the language more mature, safe, and convenient for the programmer. It also marked a turning point in the ECMAScript release schedule, which now promises a new version every year. Such a fast pace means that you can’t assume the latest language version has been adopted by all major web browsers since it takes time to roll out updates. That’s why transpiling and polyfills prevail. Today, pretty much any modern web browser can support ES5, which is the default target for the transpilers. Runtime To run a Python program, you first need to download, install, and possibly configure its interpreter for your platform. Some operating systems provide an interpreter out of the box, but it may not be the version that you’re looking to use. There are alternative Python implementations, including CPython, PyPy, Jython, IronPython, or Stackless Python. You can also choose from multiple Python distributions, such as Anaconda, that come with preinstalled third-party packages. JavaScript is different. There’s no stand-alone program to download. Instead, every major web browser ships with some kind of JavaScript engine and an API, which together make the runtime environment. In the previous section, you learned about Node.js, which allows for running JavaScript code outside of the browser. You also know about the possibility to embed JavaScript in other programming languages. Ecosystem A language ecosystem consists of its runtime environment, frameworks, libraries, tools, and dialects as well as its best practices and unwritten rules. Which combination you choose will depend on your particular use case. In the old days, you didn’t need much more than a good code editor to write JavaScript. You’d download a few libraries like jQuery, Underscore.js, or Backbone.js, or rely on a Content Delivery Network (CDN) to provide them for your clients. Today, the number of questions you need to answer and the tools you need to acquire to start building even the simplest website can be daunting. The build process for a front-end app is as complicated as it is for a back-end app, if not more so. Your web project goes through linting, transpilation, polyfilling, bundling, minification, and more. Heck, even the CSS style sheets are no longer sufficient and need to be compiled from an extension language by a preprocessor such as Sass or Less. To alleviate that, some frameworks offer utilities that set up the default project structure, generate configuration files, and download dependencies for you. As an example, you can create a new React app with this short command, provided that you already have the latest Node.js on your computer: $ npx create-react-app todo At the time of writing, this command took several minutes to finish and installed a whopping 166 MB in 1,815 packages! Compare this to starting a Django project in Python, which is instantaneous: $ django-admin startproject blog The modern JavaScript ecosystem is enormous and keeps evolving, which makes it impossible to give a thorough overview of its elements. You’ll encounter plenty of foreign tools as you’re learning JavaScript. However, the concepts behind some of them will sound familiar. Here’s how you can map them back to Python: This list isn’t exhaustive. Besides, some of the tools mentioned above have overlapping capabilities, so it’s hard to make an apples-to-apples comparison in each category. Sometimes there isn’t a direct analogy between Python vs JavaScript. For example, while you may be used to creating isolated virtual environments for your Python projects, Node.js handles that out of the box by installing dependencies into a local folder. Conversely, JavaScript projects may require additional tools that are unique to front-end development. One such tool is Babel, which transpiles your code according to various plugins grouped into presets. It can handle experimental ECMAScript features as well as TypeScript and even React’s JSX extension syntax. Another category of tool is the module bundler, whose role is to consolidate multiple independent source files into one that can be easily consumed by a web browser. During development, you want to break down your code into reusable, testable, and self-contained modules. That’s reasonable for an experienced Python programmer. Unfortunately, JavaScript didn’t originally come with support for modularity. You still need to use a separate tool for that, although this requirement is changing. Popular choices for module bundlers are webpack, Parcel, and Browserify, which can also handle static assets. Then you have build automation tools such as Grunt and gulp. They are vaguely similar to Fabric and Ansible in Python, although they’re used locally. These tools automate boring tasks such as copying files or running the transpiler. In a large-scale single-page application (SPA) with a lot of interactive UI elements, you may need a specialized library such as Redux or MobX for state management. These libraries aren’t tied to any particular front-end framework but can be quickly hooked up. As you can see, learning the JavaScript ecosystem is an endless journey. Memory Model Both languages take advantage of automatic heap memory management to eliminate human error and to reduce cognitive load. Nevertheless, this doesn’t completely free you from the risk of getting a memory leak, and it adds some performance overhead. Note: A memory leak occurs when a piece of memory that is no longer needed remains unnecessarily occupied and there is no way to deallocate it since it’s no longer reachable from your code. A common source of memory leaks in JavaScript are global variables and closures that hold strong references to defunct objects. The orthodox CPython implementation uses reference counting as well as non-deterministic garbage collection (GC) to deal with reference cycles. Occasionally, you may be forced to manually allocate and reclaim the memory when you venture into writing a custom C extension module. In JavaScript, the actual implementation of memory management is also left to your particular engine and version since it’s not a part of the language specification. The basic strategy for garbage collection is usually the mark-and-sweep algorithm, but various optimization techniques exist. For example, the heap can be organized into generations that separate short-lived objects from long-lived ones. Garbage collection can run concurrently to offload the main thread of execution. Taking an incremental approach can help avoid bringing the program to a complete stop while the memory is cleaned up. JavaScript Type System You must be itching to learn about the JavaScript syntax, but first let’s take a quick look at its type system. It’s one of the most important components that define any programming language. Type Checking Both Python and JavaScript are dynamically typed because they check types at runtime, when the application is executing, rather than at compile time. It’s convenient because you aren’t forced to declare a variable’s type such as int or str: >>> data = 42 >>> data = 'This is a string' Here, you reuse the same variable name for two different kinds of entities that have distinct representations in computer memory. First it’s an integer number, and then it’s a piece of text. Note: It’s worth noting that some statically typed languages, such as Scala, also don’t require an explicit type declaration as long as it can be inferred from the context. Dynamic typing is often misunderstood as not having any types whatsoever. This is coming from languages in which a variable works like a box that can only fit a certain type of object. In both Python and JavaScript, the type information is tied not to the variable but to the object it points to. Such a variable is merely an alias, a label, or a pointer to some object in memory. A lack of type declarations is great for prototyping, but in larger projects it quickly becomes a bottleneck from the maintenance point of view. Dynamic typing is less secure due to a higher risk of bugs going undetected inside of infrequently exercised code execution paths. Moreover, it makes reasoning about the code much more difficult both for humans and for code editors. Python addressed this problem by introducing type hinting, which you can sprinkle variables with: data: str = 'This is a string' By default, type hints provide only informative value since the Python interpreter doesn’t care about them at runtime. However, you can add a separate utility, such as a static type checker, to your tool chain to get an early warning about mismatched types. The type hints are completely optional, which makes it possible to combine dynamically typed code with statically typed code. This approach is known as gradual typing. The idea of gradual typing was borrowed from TypeScript, which is essentially JavaScript with types that you can transpile back to plain old JavaScript. Another common feature of both languages is the use of duck typing for testing type compatibility. However, an area where Python vs JavaScript are significantly different is the strength of their type-checking mechanisms. Python demonstrates strong typing by refusing to act upon objects with incompatible types. For example, you can use the plus ( +) operator to add numbers or to concatenate strings, but you can’t mix the two: >>> '3' + 2 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: can only concatenate str (not "int") to str The interpreter won’t implicitly promote one type to another. You have to decide for yourself and make a suitable type casting manually. If you wanted an algebraic sum, then you’d do this: >>> int('3') + 2 5 To join the two strings together, you’d cast the second operand accordingly: >>> '3' + str(2) >>> '32' JavaScript, on the other hand, uses weak typing, which automatically coerces types according to a set of rules. Unfortunately, these rules are inconsistent and hard to remember as they depend on operator precedence. Taking the same example as before, JavaScript will implicitly convert numbers to strings when you use the plus ( +) operator: > '3' + 2 '32' That’s great as long as it’s the desired behavior. Otherwise, you’ll be pulling your hair out trying to find the root cause of a logical error. But it gets even more funky than that. Let’s see what happens if you change the operator to something else: > '3' - 2 1 Now it’s the other operand that gets converted to a number so the end result isn’t a string. As you can see, weak typing can be quite surprising. The strength of type checking isn’t just black and white. Python lies somewhere in the middle of this spectrum. For instance, it’ll happily add an integer to a floating-point number, whereas the Swift programming language would raise an error in such a situation. Note: Strong vs weak typing is independent from static vs dynamic typing. For instance, the C programming language is statically and weakly typed at the same time. To recap, JavaScript is dynamically as well as weakly typed and supports duck typing. JavaScript Types In Python, everything is an object, whereas JavaScript makes a distinction between primitive and reference types. They differ in a couple of ways. First, there are only a few predefined primitive types that you need to care about because you can’t make your own. The majority of built-in data types that come with JavaScript are reference types. These are the only primitive types available in JavaScript: boolean null number string symbol(since ES6) undefined On the other hand, here are a handful of reference types that come with JavaScript off the shelf: Array Boolean Date Map Number Object RegExp Set String Symbol - (…) There’s also a proposal to include a new BigInt numeric type, which some browsers already support, in ES11. Other than that, any custom data types that you might define are going to be reference types. Variables of primitive types are stored in a special memory area called the stack, which is fast but has a limited size and is short-lived. Conversely, objects with reference types are allocated on the heap, which is only restricted by the amount of physical memory available on your computer. Such objects have a much longer life cycle but are slightly slower to access. Primitive types are bare values without any attributes or methods to call. However, as soon as you try to access one using dot notation, the JavaScript engine will instantly wrap a primitive value in the corresponding wrapper object: > 'Lorem ipsum'.length 11 Even though a string literal in JavaScript is a primitive data type, you can check its .length attribute. What happens under the hood is that your code is replaced with a call to the String object’s constructor: > new String('Lorem ipsum').length 11 A constructor is a special function that creates a new instance of a given type. You can see that the .length attribute is defined by the String object. This wrapping mechanism is known as autoboxing and was copied directly from the Java programming language. The other and more tangible difference between primitive and reference types is how they’re passed around. Specifically, whenever you assign or pass a value of a primitive type, you actually create a copy of that value in memory. Here’s an example: > x = 42 > y = x > x++ // This is short for x += 1 > console.log(x, y) 43 42 The assignment y = x creates a new value in memory. Now you have two distinct copies of the number 42 referenced by x and y, so incrementing one doesn’t affect the other. However, when you pass a reference to an object literal, then both variables point to the same entity in memory: > x = {name: 'Person1'} > y = x > x. console.log(y) {name: 'Person2'} Object is a reference type in JavaScript. Here, you’ve got two variables, x and y, referring to the same instance of a Person object. The change made to one of the variables is reflected in the other variable. Last but not least, primitive types are immutable, which means that you can’t change their state once they are initialized. Every modification, such as incrementing a number or making text uppercase, results in a brand-new copy of the original value. While this is a bit wasteful, there are plenty of good reasons to use immutable values, including thread safety, simpler design, and consistent state management. Note: To be fair, this is almost identical to how Python deals with passing objects despite its lack of primitive types. Mutable types such as list and dict don’t create copies, whereas immutable types such as int and str do. To check if a variable is a primitive type or a reference type in JavaScript, you can use the built-in typeof operator: > typeof 'Lorem ipsum' 'string' > typeof new String('Lorem ipsum') 'object' For reference types, the typeof operator always returns a generic "object" string. Note: Always use the typeof operator to check if a variable is undefined. Otherwise, you may find yourself in trouble: > typeof noSuchVariable === 'undefined' true > noSuchVariable === undefined ReferenceError: noSuchVariable is not defined Comparing a non-existing variable to any value will throw an exception! If you want to obtain a more detailed information about a particular type, then you have a couple of options: > today = new Date() > today.constructor.name 'Date' > today instanceof Date true > Date.prototype.isPrototypeOf(today) true You can try checking an object’s constructor name using the instanceof operator, or you can test if it’s derived from a particular parent type with the .prototype property. Type Hierarchy Python and JavaScript are object-oriented programming languages. They both allow you to express code in terms of objects that encapsulate identity, state, and behavior. While most programming languages, including Python, use class-based inheritance, JavaScript is one of a few that don’t. Note: A class is a template for objects. You can think of classes like cookie cutters or object factories. To create hierarchies of custom types in JavaScript, you need to become familiar with prototypal inheritance. That is often one the most challenging concepts to understand when you make a switch from a more classical inheritance model. If you have twenty minutes, then you can watch a great video on prototypes that clearly explains the concept. Note: Contrary to Python, multiple inheritance isn’t possible in JavaScript because any given object can have only one prototype. That said, you can use proxy objects, which were introduced in ES6, to mitigate that. The gist of the story is that there are no classes in JavaScript. Well, technically, you can use the class keyword that was introduced in ES6, but it’s purely a syntactic sugar to make things easier for newcomers. Prototypes are still used behind the scenes, so it’s worthwhile to get a closer look at them, which you’ll have a chance to do later on. Function Type Lastly, functions are an interesting part of the JavaScript and Python type systems. In both languages, they’re often referred to as first-class citizens or first-class objects because the interpreter doesn’t treat them any differently than other data types. You can pass a function as an argument, return it from another function, or store it in a variable just like a regular value. This is a very powerful feature that allows you to define higher-order functions and to take full advantage of the functional paradigm. For languages in which functions are special entities, you can work around this with the help of design patterns such as the strategy pattern. JavaScript is even more flexible than Python in regard to functions. You can define an anonymous function expression full of statements with side effects, whereas Python’s lambda function must contain exactly one expression and no statements: let countdown = 5; const id = setInterval(function() { if (countdown > 0) { console.log(`${countdown--}...`); } else if (countdown === 0) { console.log('Go!'); clearInterval(id); } }, 1000); The built-in setInterval() lets you execute a given function periodically in time intervals expressed in milliseconds until you call clearInterval() with the corresponding ID. Notice the use of a conditional statement and the mutation of a variable from the outer scope of the function expression. JavaScript Syntax JavaScript and Python are both high-level scripting languages that share a fair bit of syntactical similarities. This is especially true of their latest versions. That said, JavaScript was designed to resemble Java, whereas Python was modeled after the ABC and Modula-3 languages. Code Blocks One of the hallmarks of Python is the use of mandatory indentation to denote a block of code, which is quite unusual and frowned upon by new Python converts. Many popular programming languages, including JavaScript, use curly brackets or special keywords instead: function fib(n) { if (n > 1) { return fib(n-2) + fib(n-1); } return 1; } In JavaScript, every block of code consisting of more than one line needs an opening { and a closing }, which gives you the freedom to format your code however you like. You can mix tabs with spaces and don’t need to pay attention to your bracket placement. Unfortunately, this can result in messy code and sectarian conflicts between developers with different style preferences. This makes code reviews problematic. Therefore, you should always establish coding standards for your team and use them consistently, preferably in an automated way. Note: You could simplify the function body above by taking advantage of the ternary if ( ?:), which is sometimes called the Elvis operator because it looks like the hairstyle of the famous singer: return (n > 1) ? fib(n-2) + fib(n-1) : 1; This is equivalent to a conditional expression in Python. Speaking of indentation, it’s customary for JavaScript code to be formatted using two spaces per indentation level instead of the recommended four in Python. Statements To reduce friction for those making a switch from Java or another C-family programming language, JavaScript terminates statements with a familiar semicolon ( ;). If you’ve ever programmed in one of those languages, then you’ll know that putting a semicolon after an instruction becomes muscle memory: alert('hello world'); Semicolons aren’t required in JavaScript, though, because the interpreter will take a guess and insert one for you automatically. In most cases, it’ll be right, but sometimes it may lead to peculiar results. Note: You can use semicolons in Python, too! Although they’re not very popular, they help isolate multiple statements on a single line: import pdb; pdb.set_trace() People have strong opinions on whether to use the semicolon explicitly or not. While there are a few corner cases in which it matters, it’s largely just a convention. Identifiers Identifiers, such as variable or function names, must be alphanumeric in JavaScript and Python. In other words, they can only contain letters, digits, and a few special characters. At the same time, they can’t start with a digit. While non-Latin characters are allowed, you should generally avoid them: - Legal: foo, foo42, _foo, $foo, fößar - Illegal: 42foo Names in both languages are case sensitive, so variables like foo and Foo are distinct. Nonetheless, the naming conventions in JavaScript are slightly different than in Python: In general, Python recommends using lower_case_with_underscores, also known as snake_case, for compound names, so that individual words get separated with an underscore character ( _). The only exception to that rule is classes, whose names should follow the CapitalizedWords, or Pascal case, style. JavaScript also uses CapitalizedWords for types but mixedCase, or lower camelCase, for everything else. String Literals To define string literals in JavaScript, you can use a pair of single quotes ( ') or double quotes ( ") interchangeably, just like in Python. However, for a long time, there was no way to define multiline strings in JavaScript. Only ES6 in 2015 brought template literals, which look like a hybrid of f-strings and multiline strings borrowed from Python: var name = 'John Doe'; var message = `Hi ${name.split(' ')[0]}, We're writing to you regarding... Kind regards, Xyz `; A template starts with a backtick (`), also known as the grave accent, instead of regular quotes. To interpolate a variable or any legal expression, you have to use the dollar sign followed by a pair of matching curly brackets: ${...}. This is different from Python’s f-strings, which don’t require the dollar sign. Variable Scopes When you define a variable in JavaScript the same way that you would normally do in Python, you’re implicitly creating a global variable. Since global variables break encapsulation, you should rarely need them! The correct way to declare variables in JavaScript has always been through the var keyword: x = 42; // This is a global variable. Did you really mean that? var y = 15; // This is global only when declared in a global context. Unfortunately, this doesn’t declare a truly local variable, and it has its own problems that you’ll find out about in the upcoming section. Since ES6, there’s been a better way to declare variables and constants with the let and const keywords, respectively: > let name = 'John Doe'; > const PI = 3.14; > PI = 3.1415; TypeError: Assignment to constant variable. Unlike constants, variables in JavaScript don’t need an initial value. You can provide one later: let name; name = 'John Doe'; When you leave off the initial value, you create what’s called a variable declaration rather than a variable definition. Such variables automatically receive a special value of undefined, which is one of the primitive types in JavaScript. This is different in Python, where you always define variables except for variable annotations. But even then, these variables aren’t technically declared: name: str name = 'John Doe' Such an annotation doesn’t affect the variable life cycle. If you referred to name before the assignment, then you’d receive a NameError exception. Switch Statements If you’ve been complaining about Python not having a proper switch statement, then you’ll be happy to learn that JavaScript does: // As with C, clauses will fall through unless you break out of them. switch (expression) { case 'kilo': value = bytes / 2**10; break; case 'mega': value = bytes / 2**20; break; case 'giga': value = bytes / 2**30; break; default: console.log(`Unknown unit: "${expression}"`); } The expression can evaluate to any type, including a string, which wasn’t always the case in the older Java versions that influenced JavaScript. By the way, did you notice the familiar exponentiation operator ( **) in the code snippet above? It wasn’t available in JavaScript until ES7 in 2016. Enumerations There’s no native enumeration type in pure JavaScript, but you can use the enum type in TypeScript or emulate one with something similar to this: const Sauce = Object.freeze({ BBQ: Symbol('bbq'), CHILI: Symbol('chili'), GARLIC: Symbol('garlic'), KETCHUP: Symbol('ketchup'), MUSTARD: Symbol('mustard') }); Freezing an object prevents you from adding or removing its attributes. This is different from a constant, which can be mutable! A constant will always point to the same object, but the object itself might change its value: > const fruits = ['apple', 'banana']; > fruits.push('orange'); // ['apple', 'banana', 'orange'] > fruits = []; TypeError: Assignment to constant variable. You can add an orange to the array, which is mutable, but you can’t modify the constant that is pointing to it. Arrow Functions Until ES6, you could only define a function or an anonymous function expression using the function keyword: function add(a, b) { return a + b; } let add = function(a, b) { return a + b; }; However, to reduce the boilerplate code and to fix a slight problem with binding functions to objects, you can now use the arrow function in addition to the regular syntax: let add = (a, b) => a + b; Notice that there’s no function keyword anymore, and the return statement is implicit. The arrow symbol ( =>) separates the function’s arguments from its body. People sometimes call it the fat arrow function because it was originally borrowed from CoffeeScript, which also has a thin arrow ( ->) counterpart. Arrow functions are most suitable for small, anonymous expressions like lambdas in Python, but they can contain multiple statements with side effects if needed: let add = (a, b) => { const result = a + b; return result; } When you want to return an object literal from an arrow function, you need to wrap it in parentheses to avoid ambiguity with a block of code: let add = (a, b) => ({ result: a + b }); Otherwise, the function body would be confused for a block of code without any return statements, and the colon would create a labeled statement rather than a key-value pair. Default Arguments Starting with ES6, function arguments can have default values like in Python: > function greet(name = 'John') { … console.log('Hello', name); … } > greet(); Hello John Unlike Python, however, the default values are resolved every time the function is called instead of only when it’s defined. This makes it possible to safely use mutable types as well as to dynamically refer to other arguments passed at runtime: > function foo(a, b=a+1, c=[]) { … c.push(a); … c.push(b); … console.log(c); … } > foo(1); [1, 2] > foo(5); [5, 6] Every time you call foo(), its default arguments are derived from the actual values passed to the function. Variadic Functions When you want to declare a function with variable number of parameters in Python, you take advantage of the special *args syntax. The JavaScript equivalent would be the rest parameter defined with the spread ( ...) operator: > function average(...numbers) { … if (numbers.length > 0) { … const sum = numbers.reduce((a, x) => a + x); … return sum / numbers.length; … } … return 0; … } > average(); 0 > average(1); 1 > average(1, 2); 1.5 > average(1, 2, 3); 2 The spread operator can also be used to combine iterable sequences. For example, you can extract the elements of one array into another: const redFruits = ['apple', 'cherry']; const fruits = ['banana', ...redFruits]; Depending on where you place the spread operator in the target list, you may prepend or append elements or insert them somewhere in the middle. Destructuring Assignments To unpack an iterable into individual variables or constants, you can use the destructuring assignment: > const fruits = ['apple', 'banana', 'orange']; > const [a, b, c] = fruits; > console.log(b); banana Similarly, you can destructure and even rename object attributes: const person = {name: 'John Doe', age: 42, married: true}; const {name: fullName, age} = person; console.log(`${fullName} is ${age} years old.`); This helps avoid name collisions for variables defined within one scope. with Statements There’s an alternative way to drill down to an object’s attributes using the slightly old with statement: const person = {name: 'John Doe', age: 42, married: true}; with (person) { console.log(`${name} is ${age} years old.`); } It works like a construct in Object Pascal, in which a local scope gets temporarily augmented with attributes of the given object. Note: The with statements in Python vs JavaScript are false friends. In Python, you use a with statement to manage resources through context managers. Since this might be obscure, the with statement is generally discouraged and is even unavailable in strict mode. Iterables, Iterators, and Generators Since ES6, JavaScript has had the iterable and iterator protocols as well as generator functions, which look almost identical to Python’s iterables, iterators, and generators. To turn a regular function into a generator function, you need to add an asterisk ( *) after the function keyword: function* makeGenerator() {} You can’t make generator functions out of arrow functions, though. When you call a generator function, it won’t execute the body of that function. Instead, it returns a suspended generator object that conforms to the iterator protocol. To advance your generator, you can call .next(), which is similar to Python’s built-in > const generator = makeGenerator(); > const {value, done} = generator.next(); > console.log(value); undefined > console.log(done); true As a result, you’ll always get a status object with two attributes: the subsequent value and a flag that indicates if the generator has been exhausted. Python throws the StopIteration exception when there are no more values in the generator. To return some value from your generator function, you can use either the yield keyword or the return keyword. The generator will keep feeding values until there are no more yield statements, or until you return prematurely: let shouldStopImmediately = false; function* randomNumberGenerator(maxTries=3) { let tries = 0; while (tries++ < maxTries) { if (shouldStopImmediately) { return 42; // The value is optional } yield Math.random(); } } The above generator will keep yielding random numbers until it reaches the maximum number of tries or you set a flag to make it terminate early. The equivalent of the yield from expression in Python, which delegates the iteration to another iterator or an iterable object, is the yield* expression: > function* makeGenerator() { … yield 1; … yield* [2, 3, 4]; … yield 5; … } > const generator = makeGenerator() > generator.next(); {value: 1, done: false} > generator.next(); {value: 2, done: false} > generator.next(); {value: 3, done: false} > generator.next(); {value: 4, done: false} > generator.next(); {value: 5, done: false} > generator.next(); {value: undefined, done: true} Interestingly, it’s legal to return and yield at the same time: function* makeGenerator() { return yield 42; } However, due to a grammar limitation, you’d have to use parentheses to achieve the same effect in Python: def make_generator(): return (yield 42) To explain what’s going on, you can rewrite that example by introducing a helper constant: function* makeGenerator() { const message = yield 42; return message; } If you know coroutines in Python, then you’ll remember that generator objects can be both producers and consumers. You can send arbitrary values into a suspended generator by providing an optional argument to > function* makeGenerator() { … const message = yield 'ping'; … return message; … } > const generator = makeGenerator(); > generator.next(); {value: "ping", done: false} > generator.next('pong'); {value: "pong", done: true} The first call to .next() runs the generator until the first yield expression, which happens to return "ping". The second call passes a "pong" that is stored in the constant and immediately returned from the generator. Asynchronous Functions The nifty mechanism explored above was the basis for asynchronous programming and the adoption of the async and await keywords in Python. JavaScript followed the same path by bringing in asynchronous functions with ES8 in 2017. While a generator function returns a special kind of iterator, the generator object, asynchronous functions always return a promise, which was first introduced in ES6. A promise represents the future result of an asynchronous call such as fetch() from the Fetch API. When you return any value from an asynchronous function, it’s automatically wrapped in a promise object that can be awaited in another asynchronous function: async function greet(name) { return `Hello ${name}`; } async function main() { const promise = greet('John'); const greeting = await promise; console.log(greeting); // "Hello John" } main(); Typically, you’d await and assign the result in one go: const greeting = await greet('John'); Although you can’t completely get rid of promises with asynchronous functions, they significantly improve your code readability. It starts to look like synchronous code even though your functions can be paused and resumed multiple times. One notable difference from the asynchronous code in Python is that, in JavaScript, you don’t need to manually set up the event loop, which runs in the background implicitly. JavaScript is inherently asynchronous. Objects and Constructors You know from an earlier part of this article that JavaScript doesn’t have a concept of classes. Instead, it knows about objects. You can create new objects using object literals, which look like Python dictionaries: let person = { name: 'John Doe', age: 42, married: true }; It behaves like a dictionary in that you can access individual attributes using dot syntax or square brackets: > person.age++; > person['age']; 43 Object attributes don’t need to be enclosed in quotes unless they contain spaces, but that isn’t a common practice: > let person = { … 'full name': 'John Doe' … }; > person['full name']; 'John Doe' > person.full name; SyntaxError: Unexpected identifier Just like a dictionary and some objects in Python, objects in JavaScript have dynamic attributes. That means you can add new attributes or delete existing ones from an object: > let person = {name: 'John Doe'}; > person.age = 42; > console.log(person); {name: "John Doe", age: 42} > delete person.name; true > console.log(person); {age: 42} Starting from ES6, objects can have attributes with computed names: > let person = { … ['full' + 'Name']: 'John Doe' … }; > person.fullName; 'John Doe' Python dictionaries and JavaScript objects are allowed to contain functions as their keys and attributes. There are ways to bind such functions to their owner so that they behave like class methods. For example, you can use a circular reference: > let person = { … name: 'John Doe', … sayHi: function() { … console.log(`Hi, my name is ${person.name}.`); … } … }; > person.sayHi(); Hi, my name is John Doe. sayHi() is tightly coupled to the object it belongs to because it refers to the person variable by name. If you were to rename that variable at some point, then you’d have to go through the whole object and make sure to update all occurrences of that variable name. A slightly better approach takes advantage of the implicit this variable that is exposed to functions. The value of this can be different depending on who’s calling the function: > let jdoe = { … name: 'John Doe', … sayHi: function() { … console.log(`Hi, my name is ${this.name}.`); … } … }; > jdoe.sayHi(); Hi, my name is John Doe. After replacing a hard-coded person with this, which is similar to Python’s self, it won’t matter what the variable name is, and the result will be the same as before. Note: The example above won’t work if you replace the function expression with an arrow function, because the latter has different scoping rules for the this variable. That’s great, but as soon as you decide to introduce more objects of the same Person kind, you’ll have to repeat all attributes and redefine all functions in each object. What you’d rather have is a template for Person objects. The canonical way of creating custom data types in JavaScript is to define a constructor, which is an ordinary function: function Person() { console.log('Calling the constructor'); } As a convention, to denote that such a function has a special meaning, you’d capitalize the first letter to follow CapitalizedWords instead of the usual mixedCase. On the syntactical level, however, it’s just a function that you can call normally: > Person(); Calling the constructor undefined What makes it special is how you call it: > new Person(); Calling the constructor Person {} When you add the new keyword in front of the function call, it’ll implicitly return a brand-new instance of a JavaScript object. That means your constructor shouldn’t contain the return statement. While the interpreter is responsible for allocating memory for and scaffolding a new object, the role of a constructor is to give the object an initial state. You can use the previously mentioned this keyword to refer to a new instance under construction: function Person(name) { this.name = name; this.sayHi = function() { console.log(`Hi, my name is ${this.name}.`); } } Now you can create multiple distinct Person entities: const jdoe = new Person('John Doe'); const jsmith = new Person('John Smith'); Alright, but you’re still duplicating function definitions across all instances of the Person type. The constructor is just a factory that hooks the same values to individual objects. It’s wasteful and could lead to inconsistent behavior if you were to change it at some point. Consider this: > const jdoe = new Person('John Doe'); > const jsmith = new Person('John Smith'); > jsmith.sayHi = _ => console.log('What?'); > jdoe.sayHi(); Hi, my name is John Doe. > jsmith.sayHi(); What? Since every object gets a copy of its attributes, including functions, you must carefully update all instances to keep a uniform behavior. Otherwise, they’ll do different things, which typically isn’t what you want. Objects might have a different state, but their behavior usually won’t change. Prototypes As a rule of thumb, you should move the business logic from the constructor, which is concerned about data, to the prototype object: function Person(name) { this.name = name; } Person.prototype.sayHi = function() { console.log(`Hi, my name is ${this.name}.`); }; Every object has a prototype. You can access your custom data type’s prototype by referring to the .prototype attribute of your constructor. It’ll already have a few predefined attributes, such as .toString(), that are common to all objects in JavaScript. You can add more attributes with your custom methods and values. When JavaScript is looking for an object’s attribute, it begins by trying to find it in that object. Upon failure, it moves on to the respective prototype. Therefore, attributes defined in a prototype are shared across all instances of the corresponding type. Prototypes are chained, so the attribute lookup continues until there are no more prototypes in the chain. This is analogous to type hierarchy through inheritance. Not only can you create methods in one place thanks to the prototypes, but you can also create static attributes by attaching them to one: > Person.prototype.PI = 3.14; > new Person('John Doe').PI; 3.14 > new Person('John Smith').PI; 3.14 To illustrate the power of prototypes, you may try to extend the behavior of existing objects, or even a built-in data type. Let’s add a new method to the string type in JavaScript by specifying it in the prototype object: String.prototype.toSnakeCase = function() { return this.replace(/\s+/g, '') .split(/(?<=[a-z])(?=[A-Z])/g) .map(x => x.toLowerCase()) .join('_'); }; It uses regular expressions to transform the text into snake_case. Suddenly, string variables, constants, and even string literals can benefit from it: > "loremIpsumDolorSit".toSnakeCase(); 'lorem_ipsum_dolor_sit' However, this is a double-edged sword. In a similar way, someone could override one of the existing methods in a prototype of a popular type, which would break the assumptions made elsewhere. Such monkey patching can be useful in testing but is otherwise very dangerous. Classes Since ES6, there’s been an alternative way to define prototypes that uses a much more familiar syntax: class Person { constructor(name) { this.name = name; } sayHi() { console.log(`Hi, my name is ${this.name}.`); } } Even though this looks like you’re defining a class, it’s only a convenient high-level metaphor for specifying custom data types in JavaScript. Behind the scenes, there are no real classes! For that reason, some people go so far as to advocate against using this new syntax at all. You can have getters and setters in your class, which are similar to the Python’s class properties: > class Square { … constructor(size) { … this.size = size; // Triggers the setter … } … set size(value) { … this._size = value; // Sets the private field … } … get area() { … return this._size**2; … } … } > const box = new Square(3); > console.log(box.area); 9 > box.size = 5; > console.log(box.area); 25 When you omit the setter, you create a read-only property. That’s misleading, however, because you can still access the underlying private field like you can in Python. A common pattern to encapsulate internal implementation in JavaScript is an Immediately Invoked Function Expression (IIFE), which can look like this: > const odometer = (function(initial) { … let mileage = initial; … return { … get: function() { return mileage; }, … put: function(miles) { mileage += miles; } … }; … })(33000); > odometer.put(65); > odometer.put(12); > odometer.get(); 33077 In other words, it’s an anonymous function that calls itself. You can use the newer arrow function to create an IIFE too: const odometer = ((initial) => { let mileage = initial; return { get: _ => mileage, put: (miles) => mileage += miles }; })(33000); This is how JavaScript historically emulated modules to avoid name collisions in the global namespace. Without an IIFE, which uses closures and function scope to expose only a limited public-facing API, everything would be accessible from the calling code. Sometimes you want to define a factory or a utility function that logically belongs to your class. In Python, you have the @classmethod and @staticmethod decorators, which allow you to associate static methods with the class. To achieve the same result in JavaScript, you need to use the static method modifier: class Color { static brown() { return new Color(244, 164, 96); } static mix(color1, color2) { return new Color(...color1.channels.map( (x, i) => (x + color2.channels[i]) / 2 )); } constructor(r, g, b) { this.channels = [r, g, b]; } } const color1 = Color.brown(); const color2 = new Color(128, 0, 128); const blended = Color.mix(color1, color2); Note that there’s no way of defining static class attributes at the moment, at least not without additional transpiler plugins. Chaining prototypes can resemble class inheritance when you extend one class from another: class Person { constructor(firstName, lastName) { this.firstName = firstName; this.lastName = lastName; } fullName() { return `${this.firstName} ${this.lastName}`; } } class Gentleman extends Person { signature() { return 'Mr. ' + super.fullName() } } In Python, you could extend more than one class, but that it isn’t possible in JavaScript. To reference attributes from the parent class, you can use super(), which has to be called in the constructor to pass arguments up the chain. Decorators Decorators are yet another feature that JavaScript copied from Python. They’re still technically a proposal that is subject to change, but you can test them out using an online playground or a local transpiler. Be warned, however, that they require a bit of configuration. Depending on the chosen plugin and its options, you’ll get different syntax and behavior. Several frameworks already use a custom syntax for decorators, which need to be transpiled into plain JavaScript. If you opt for the TC-39 proposal, then you’ll be able to decorate only classes and their members. It seems there won’t be any special syntax for function decorators in JavaScript. JavaScript Quirks It took ten days for Brendan Eich to create a prototype of what later became JavaScript. After it was presented to the stakeholders at a business meeting, the language was considered production ready and didn’t go through a lot of changes for many years. Unfortunately, that made the language infamous for its oddities. Some people didn’t even regard JavaScript as a “real” programming language, which made it a victim of many jokes and memes. Today, the language is much friendlier than it used to be. Nevertheless, it’s worth knowing what to avoid, since a lot of legacy JavaScript is still out there waiting to bite you. Bogus Array Python’s lists and tuples are implemented as arrays in the traditional sense, whereas JavaScript’s Array type has more in common with Python’s dictionary. What’s an array, then? In computer science, an array is a data structure that occupies a contiguous block of memory, and whose elements are ordered and have homogeneous sizes. This way, you can access them randomly with a numerical index. In Python, a list is an array of pointers that are typically integer numbers, which reference heterogeneous objects scattered around in various regions of memory. Note: For low-level arrays in Python, you might be interested in checking out the built-in array module. JavaScript’s array is an object whose attributes happen to be numbers. They’re not necessarily stored next to each other. However, they keep the right order during iteration. When you delete an element from an array in JavaScript, you make a gap: > const fruits = ['apple', 'banana', 'orange']; > delete fruits[1]; true > console.log(fruits); ['apple', empty, 'orange'] > fruits[1]; undefined The array doesn’t change its size after the removal of one of its elements: > console.log(fruits.length); 3 Conversely, you can put a new element at a distant index even though the array is much shorter: > fruits[10] = 'watermelon'; > console.log(fruits.length); 11 > console.log(fruits); ['apple', empty, 'orange', empty × 7, 'watermelon'] This wouldn’t work in Python. Array Sorting Python is clever about sorting data because it can tell the difference between element types. When you sort a list of numbers, for example, it’ll put them in ascending order by default: >>> sorted([53, 2020, 42, 1918, 7]) [7, 42, 53, 1918, 2020] However, if you wanted to sort a list of strings, then it would magically know how to compare the elements so that they appear in lexicographical order: >>> sorted(['lorem', 'ipsum', 'dolor', 'sit', 'amet']) ['amet', 'dolor', 'ipsum', 'lorem', 'sit'] Things get complicated when you start to mix different types: >>> sorted([42, 'not a number']) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: '<' not supported between instances of 'str' and 'int' By now, you know that Python is a strongly typed language and doesn’t like mixing types. JavaScript, on the other hand, is the opposite. It’ll eagerly convert elements of incompatible types according to some obscure rules. You can use .sort() to do the sorting in JavaScript: > ['lorem', 'ipsum', 'dolor', 'sit', 'amet'].sort(); ['amet', 'dolor', 'ipsum', 'lorem', 'sit'] It turns out that sorting strings works as expected. Let’s see how it copes with numbers: > [53, 2020, 42, 1918, 7].sort(); [1918, 2020, 42, 53, 7] What happened here is that the array elements got implicitly converted to strings and were sorted lexicographically. To prevent that, you have to provide your custom sorting strategy as a function of two elements to compare, for example: > [53, 2020, 42, 1918, 7].sort((a, b) => a - b); [7, 42, 53, 1918, 2020] The contract between your strategy and the sorting method is that your function should return one of three values: - Zero when the two elements are equal - A positive number when elements need to be swapped - A negative number when the elements are in the right order This is a common pattern present in other languages, and it was also the old way of sorting in Python. Automatic Semicolon Insertion At this point, you know that semicolons in JavaScript are optional because the interpreter will insert them automatically at the end of each instruction if you don’t do so yourself. This can lead to surprising results under some circumstances: function makePerson(name) { return ({ fullName: name, createdAt: new Date() }) } In this example, you might expect the JavaScript engine to insert a missing semicolon at the very end of your function, right after the closing parenthesis of the object literal. However, when you call the function, this happens: > const jdoe = makePerson('John Doe'); > console.log(jdoe); undefined Your function changed the intended action by returning an undefined because two semicolons were inserted instead of one: function makePerson(name) { return; ({ fullName: name, createdAt: new Date() }); } As you can see, relying on the fact that semicolons are optional introduces some risk of errors in your code. On the other hand, it won’t help if you start putting semicolons everywhere. To fix this example, you need to change your code formatting so that the returned value begins on the same line as the return statement: function makePerson(name) { return { fullName: name, createdAt: new Date() }; } In some situations, you can’t rely on automatic semicolon insertion and you need to put one explicitly instead. For example, you can’t leave out the semicolon when you start a new line with a parenthesis: const total = 2 + 3 (4 + 5).toString() This produces a runtime error due to the lack of a semicolon, which makes the two lines collapse into one: const total = 2 + 3(4 + 5).toString(); A numeric literal can’t be called like a function. Confusing Loops Loops in JavaScript are particularly confusing because there are so many of them and they look alike, whereas Python has just two. The primary type of loop in JavaScript is the for loop, which was transplanted from Java: const fruits = ['apple', 'banana', 'orange']; for (let i = 0; i < fruits.length; i++) { console.log(fruits[i]); } It has three parts, all of which are optional: - Initialization: let i = 0 - Condition: i < fruits.length - Cleanup: i++ The first part executes only once before the loop starts, and it typically sets the initial value for the counter. Then, after each iteration, the cleanup part runs to update the counter. Right after that, the condition is evaluated to determine if the loop should continue. This is roughly equivalent to iterating over a list of indices in Python: fruits = ['apple', 'banana', 'orange'] for i in range(len(fruits)): print(fruits[i]) Notice how much work Python does for you. On the other hand, having the loop internals exposed gives you a lot of flexibility. This type of loop is generally deterministic because you know how many times it’ll iterate from the beginning. In JavaScript, you can make the conventional for loop non-deterministic and even infinite by omitting one or more of its parts: for (;;) { // An infinite loop } However, a more idiomatic way to make such an iteration would involve the while loop, which is quite similar to the one you’d find in Python: while (true) { const age = prompt('How old are you?'); if (age >= 18) { break; } } In addition to this, JavaScript has a do...while loop, which is guaranteed to run at least once because it checks the condition after its body. You can rewrite this example in the following way: let age; do { age = prompt('How old are you?'); } while (age < 18); Apart from stopping an iteration midway with the break keyword, you can skip to the next iteration using the continue keyword as you would in Python: for (let i = 0; i < 10; i++) { if (i % 2 === 0) { continue; } console.log(i); } What you can’t do, though, is use the else clause on loops. You might be tempted to try out the for...in loop in JavaScript, thinking it would iterate over values like a Python for loop. Although it looks similar and has a similar name, it actually behaves very differently! A for...in loop in JavaScript iterates over attributes of the given object, including the ones found in the prototype chain: > const object = {name: 'John Doe', age: 42}; > for (const attribute in object) { … console.log(`${attribute} = ${object[attribute]}`); … } name = John Doe age = 42 Should you want to exclude attributes attached to the prototype, you can call hasOwnProperty(). It will test whether a given attribute belongs to an object instance. When you feed the for...in loop with an array, it’ll iterate over the array’s numeric indices. As you know by now, arrays in JavaScript are just glorified dictionaries: > const fruits = ['apple', 'banana', 'orange']; … for (const fruit in fruits) { … console.log(fruit); … } 0 1 2 On the other hand, arrays expose .forEach(), which can substitute for a loop: const fruits = ['apple', 'banana', 'orange']; fruits.forEach(fruit => console.log(fruit)); This is a higher-order function that accepts a callback that will run for every element in the array. This pattern fits a bigger picture since JavaScript takes a functional approach to iteration in general. Note: To test if a single attribute is defined in an object, use the in operator: > 'toString' in [1, 2, 3]; true > '__str__' in [1, 2, 3]; false Finally, when the ES6 specification introduced the iterable and iterator protocols, it allowed the implementation of a long-awaited loop that would iterate over sequences. However, since the for...in name was already taken, they had to come up with a different one. The for...of loop is the closest relative to the for loop in Python. With it, you can iterate over any iterable object, including strings and arrays: const fruits = ['apple', 'banana', 'orange']; for (const fruit of fruits) { console.log(fruit); } This is probably the most intuitive way for a Python programmer to iterate in JavaScript. Constructor Without new Let’s go back to the Person type defined earlier: function Person(name) { this.name = name; this.sayHi = function() { console.log(`Hi, my name is ${this.name}.`); } } If you forget to call that constructor correctly, with the new keyword in front of it, then it’ll fail silently and leave you with an undefined variable: > let bob = Person('Bob'); > console.log(bob); undefined There’s a trick to protect yourself against this mistake. When you omit the new keyword, there won’t be any object to bind to, so the this variable inside the constructor will point to the global object, such as the window object in the web browser. You can detect that and delegate to a valid constructor invocation: > function Person(name) { … if (this === window) { … return new Person(name); … } … this.name = name; … this.sayHi = function() { … console.log(`Hi, my name is ${this.name}.`); … } … } > let person = Person('John Doe'); > console.log(person); Person {name: 'John Doe', sayHi: ƒ} This is the only reason you might want to include a return statement in your constructor. Global Scope by Default Unless you’re already at the global scope, your variables automatically become global when you don’t precede their declarations with one of these keywords: var let const It’s easy to fall into this trap, especially when you’re coming from Python. For example, such a variable defined in a function will become visible outside of it: > function call() { … global = 42; … let local = 3.14 … } > call(); > console.log(global); 42 > console.log(local); ReferenceError: local is not defined Interestingly, the rules determining whether you declare a local or a global variable in Python are much more complicated than this. There are also other kinds of variable scope in Python. Function Scope This quirk is only present in legacy code, which uses the var keyword for variable declaration. You’ve learned that when a variable is declared like that, it won’t be global. But it isn’t going to have a local scope either. No matter how deep in the function a variable is defined, it’ll be scoped to the entire function: > function call() { … if (true) { … for (let i = 0; i < 10; i++) { … var notGlobalNorLocal = 42 + i; … } … } … notGlobalNorLocal--; … console.log(notGlobalNorLocal); … } > call(); 50 The variable is visible and still alive at the top level of the function right before exiting. However, nested functions don’t expose their variables to the outer scope: > function call() { … function inner() { … var notGlobalNorLocal = 42; … } … inner(); … console.log(notGlobalNorLocal); … } > call(); ReferenceError: notGlobalNorLocal is not defined It works the other way around, though. Inner functions can see the variables from the outer scope, but it gets even more interesting when you return the inner function for later use. This creates a closure. Hoisting This one is related to the previous quirk and, again, applies to variables declared with the infamous var keyword. Let’s begin with a little riddle: var x = 42; function call() { console.log(x); // A = ??? var x = 24; console.log(x); // B = ??? } call(); console.log(x); // C = ??? Consider what the result will be: - A = 42, B = 24, C = 42 - A = 42, B = 24, C = 24 - A = 24, B = 24, C = 42 - A = 24, B = 24, C = 24 SyntaxError: Identifier 'x' has already been declared While you’re thinking about your answer, let’s take a closer look at what hoisting is. In short, it’s an implicit mechanism in JavaScript that moves variable declarations to the top of the function, but only those that use the var keyword. Keep in mind that it moves declarations rather than definitions. In some programming languages, like C, all variables must be declared at the beginning of a function. Other languages, such as Pascal, go even further by dedicating a special section for variable declarations. JavaScript tried to mimic that. Alright, ready? The correct answer is none of the above! It’ll print the following: - A = undefined - B = 24 - C = 42 To clarify the reasoning behind these results, you can manually perform the hoisting that JavaScript would do in this situation: var x = 42; function call() { var x; console.log(x); // A = undefined x = 24; console.log(x); // B = 24 } call(); console.log(x); // C = 42 The global variable is temporarily masked by the local variable because the name lookup goes outward. The declaration of x inside the function is moved up. When a variable is declared but not initialized, it has the undefined value. Variables aren’t the only construct that is affected by hoisting. Normally, when you define a function in JavaScript, you can call it even before its definition: call(); // Prints "hello" function call() { console.log('hello'); } This won’t work in an interactive shell where individual pieces of code are evaluated immediately. Now, when you declare a variable using the var keyword and assign a function expression to that variable, it’ll be hoisted: call(); // TypeError: call is not a function var call = function() { console.log('hello'); }; As a result, your variable will remain undefined until you initialize it. Illusory Function Signatures Function signatures don’t exist in JavaScript. Whichever formal parameters you declare, they have no impact on function invocation. Specifically, you can pass any number of arguments to a function that doesn’t expect anything, and they’ll just be ignored: > function currentYear() { … return new Date().getFullYear(); … } > currentYear(42, 'foobar'); 2020 You can also refrain from passing arguments that are seemingly required: > function truthy(expression) { … return !!expression; … } > truthy(); false Formal parameters serve as a documentation and allow you to refer to arguments by name. Otherwise, they’re not needed. Within any function, you have access to a special arguments variable, which represents the actual parameters that were passed: > function sum() { … return [...arguments].reduce((a, x) => a + x); … } > sum(1, 2, 3, 4); 10 arguments is an array-like object that is iterable and has numeric indices, but unfortunately it doesn’t come with .forEach(). To wrap it in an array, you can use the spread operator. This used to be the only way of defining variadic functions in JavaScript before the rest parameter in ES6. Implicit Type Coercion JavaScript is a weakly typed programming language, which is manifested in its ability to cast incompatible types implicitly. This can give false positives when you compare two values: if ('2' == 2) { // Evaluates to true In general, you should prefer the strict comparison operator ( ===) to be safe: > '2' === 2; false > '2' !== 2; true This operator compares both the values and the types of their operands. No Integer Type Python has a few data types to represent numbers: int float complex The previous Python generation also had the long type, which was eventually merged into int. Other programming languages are even more generous, giving you fine-grained control over memory consumption, value range, floating-point precision, and the treatment of sign. JavaScript has just one numeric type: the Number, which corresponds to Python’s float data type. Under the hood, it’s a 64-bit double-precision number that conforms to the IEEE 754 specification. This was simple and sufficient for early web development, but it can cause a few problems today. Note: To get the integer part of a floating-point number in JavaScript, you can use the built-in parseInt(). First of all, it’s remarkably wasteful in most situations. If you were to represent pixels of a single FHD video frame with JavaScript’s Number, then you’d have to allocate about 50 MB of memory. In a programming language with support for a stream of bytes, such as Python, you’d need a fraction of that amount of memory. Secondly, floating-point numbers suffer from a rounding error due to how they’re represented in computer memory. As such, they’re unsuitable for applications requiring high precision, such as monetary calculations: > 0.1 + 0.2; 0.30000000000000004 They’re unsafe in representing very big and very small numbers: > const x = Number.MAX_SAFE_INTEGER + 1; > const y = Number.MAX_SAFE_INTEGER + 2; > x === y; true But that’s not the worst part. After all, computer memory is getting cheaper by the day, and there are ways to circumvent the rounding error. When Node.js became popular, people started using it to write back-end applications. They needed a way of accessing the local file system. Some operating systems identify files by arbitrary integer numbers. Occasionally, these numbers wouldn’t have an exact representation in JavaScript, so you couldn’t open the file, or you’d read some random file without knowing. To address the problem of handling big numbers in JavaScript, there’s going to be another primitive type that can reliably represent integer numbers of any size. Some web browsers already support this proposal: > const x = BigInt(Number.MAX_SAFE_INTEGER) + 1n; > const y = BigInt(Number.MAX_SAFE_INTEGER) + 2n; > x === y; false Since you can’t mix the new BigInt data type with regular numbers, you have to either wrap them or use a special literal: > typeof 42; 'number' > typeof 42n; 'bigint' > typeof BigInt(42); 'bigint' Apart from that, the BigInt number will be compatible with two somewhat-related typed arrays for signed and unsigned integers: BigInt64Array BigUint64Array While a regular BigInt can store arbitrarily large numbers, the elements of these two arrays are limited to just 64 bits. null vs undefined Programming languages provide ways to represent the absence of a value. Python has None, Java has null, and Pascal has nil, for example. In JavaScript, you get not only null but also undefined. It may seem odd to have more than one way to represent missing values when one is already too much: I call it my billion-dollar mistake. It was the invention of the null reference in 1965. (…) This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. — Tony Hoare The difference between null and undefined is quite subtle. Variables that are declared but uninitialized will implicitly get the value of undefined. The null value, on the other hand, is never assigned automatically: let x; // undefined let y = null; At any time, you can manually assign the undefined value to a variable: let z = undefined; This distinction between null and undefined was often used to implement default function arguments before ES6. One of the possible implementations was this: function fn(required, optional) { if (typeof optional === 'undefined') { optional = 'default'; } // ... } If—for whatever reason—you wanted to keep an empty value for the optional parameter, then you couldn’t pass undefined explicitly because it would get overwritten by the default value again. To differentiate between these two scenarios, you would pass a null value instead: fn(42); // optional = "default" fn(42, undefined); // optional = "default" fn(42, null); // optional = null Apart from having to deal with null and undefined, you may sometimes experience a ReferenceError exception: > foobar; ReferenceError: foobar is not defined This indicates that you’re trying to refer to a variable that hasn’t been declared in the current scope, whereas undefined means declared but uninitialized, and null means declared and initialized but with an empty value. Scope of this Methods in Python must declare a special self parameter unless they’re static or class methods. The parameter holds a reference to a particular instance of the class. Its name can be anything because it’s always passed as the first positional argument. In JavaScript, like in Java, you can take advantage of a special this keyword, which corresponds to the current instance. But what does current instance mean? It depends on how you invoke your function. Recall the syntax for object literals: > let jdoe = { … name: 'John Doe', … whoami: function() { … console.log(this); … } … }; > jdoe.whoami(); {name: "John Doe", whoami: ƒ} Using this in a function lets you refer to a particular object that owns that function without hard-coding a variable name. It doesn’t matter if the function is defined in place as an anonymous expression or if it’s a regular function like this one: > function whoami() { … console.log(this); … } > let jdoe = {name: 'John Doe', whoami}; > jdoe.whoami(); {name: "John Doe", whoami: ƒ} What matters is the object that you’re calling the function on: > jdoe.whoami(); {name: "John Doe", whoami: ƒ} > whoami(); Window {…} In the first line, you call whoami() through an attribute of the jdoe object. The value of this is the same as the jdoe variable in that case. However, when you call that same function directly, this becomes the global object instead. Note: This rule doesn’t apply to a constructor function invoked with the new keyword. In such a case, the function’s this reference will point to the newly created object. You can think of JavaScript functions as methods attached to the global object. In a web browser, window is the global object, so in reality, the code snippet above is short for this: > jdoe.whoami(); {name: "John Doe", whoami: ƒ} > window.whoami(); Window {…} Do you see a pattern here? By default, the value of this inside a function depends on the object sitting in front of the dot operator. As long as you control how your function gets called, everything will be fine. It becomes a problem only when you don’t call the function yourself, which is a common case for callbacks. Let’s define another object literal to demonstrate this: const collection = { items: ['apple', 'banana', 'orange'], type: 'fruit', show: function() { this.items.forEach(function(item) { console.log(`${item} is a ${this.type}`); }); } }; collection is a collection of elements that have a common type. Currently, .show() doesn’t work as expected because it doesn’t reveal an element type: > collection.show(); apple is a undefined banana is a undefined orange is a undefined Although this.items correctly refers to the array of fruits, the callback function seems to have received a different this reference. That’s because the callback isn’t tied to the object literal. It’s as if it were defined elsewhere: function callback(item) { console.log(`${item} is a ${this.type}`); } const collection = { items: ['apple', 'banana', 'orange'], type: 'fruit', show: function() { this.items.forEach(callback); } }; The most straightforward way to work around this would be to replace this in the callback with a custom variable or a constant: const collection = { items: ['apple', 'banana', 'orange'], type: 'fruit', show: function() { const that = this; this.items.forEach(function(item) { console.log(`${item} is a ${that.type}`); }); } }; You persist the value of this in a local constant, which the callback then refers to. Since the callback is defined as an inner function, it can access variables and constants from the outer scope. This wouldn’t have been possible with a stand-alone function. Now the result is correct: > collection.show(); apple is a fruit banana is a fruit orange is a fruit This pattern is so common that .forEach() accepts an optional parameter for substituting this in the callback: const collection = { items: ['apple', 'banana', 'orange'], type: 'fruit', show: function() { this.items.forEach(function(item) { console.log(`${item} is a ${this.type}`); }, this); } }; collection.show(); This is more elegant than the custom hack that you saw before, and it’s also more universal because it lets you pass a regular function. While not every built-in method in JavaScript is so gracious, there are three more ways to tinker with the this reference: .apply() .bind() .call() These are methods available on function objects. With .apply() and .call(), you can invoke a function while injecting arbitrary context to it. They both work the same way but pass arguments using different syntaxes: > function whoami(x, y) { … console.log(this, x, y); … } > let jdoe = {name: 'John Doe'}; > whoami.apply(jdoe, [1, 2]); {name: "John Doe"} 1 2 > whoami.call(jdoe, 1, 2); {name: "John Doe"} 1 2 Of the three, .bind() is the most powerful because it allows you to permanently change the value of this for future invocations. It works a bit differently as it returns a new function that is bound to the given context: > const newFunction = whoami.bind(jdoe); > newFunction(1, 2); {name: "John Doe"} 1 2 > newFunction(3, 4); {name: "John Doe"} 3 4 This can be useful in solving the earlier problem of the unbound callback: const collection = { items: ['apple', 'banana', 'orange'], type: 'fruit', show: function() { this.items.forEach(callback.bind(this)); } }; Nothing that you’ve just read about this in JavaScript applies to arrow functions in ES6. That’s good, by the way, because it removes a lot of ambiguity. There’s no context binding in arrow functions, nor is there an implicit this reference available. Instead, this is treated as an ordinary variable and is subject to lexical scoping rules. Let’s rewrite one of the previous examples using arrow functions: > const collection = { … items: ['apple', 'banana', 'orange'], … type: 'fruit', … show: () => { … this.items.forEach((item) => { … console.log(`${item} is a ${this.type}`); … }); … } … }; > collection.show(); TypeError: Cannot read property 'forEach' of undefined If you trade both functions for their arrow counterparts, then you’ll get an exception because there’s no this variable in the scope anymore. You might keep the outer function so that its this reference can be picked up by the callback: const collection = { items: ['apple', 'banana', 'orange'], type: 'fruit', show: function() { this.items.forEach((item) => { console.log(`${item} is a ${this.type}`); }); } }; collection.show(); As you can see, arrow functions aren’t a complete replacement for traditional functions. This section was merely the tip of the iceberg. To find out more about quirky JavaScript behaviors, take a look at tricky code examples, which are also available as an installable Node.js module. Another great source of wisdom is Douglas Crockford’s book JavaScript: The Good Parts, which has a section devoted to the bad and awful parts as well. What’s Next? As a Pythonista, you know that mastering a programming language and its ecosystem is only the beginning of your path to success. There are more abstract concepts to grasp along the way. Document Object Model (DOM) If you’re planning to do any sort of client-side development, then you can’t escape getting familiar with the DOM. Note: You might have used the same DOM interface before to handle XML documents in Python. To allow for manipulating HTML documents in JavaScript, web browsers expose a standard interface called the DOM, which comprises various objects and methods. When a page loads, your script can gain access to the internal representation of the document through a predefined document instance: const body = document.body; It’s a global variable available to you anywhere in your code. Every document is a tree of elements. To traverse this hierarchy, you can start at the root and use the following attributes to move in different directions: - Up: .parentElement - Left: .previousElementSibling - Right: .nextElementSibling - Down: .children, .firstElementChild, .lastElementChild These attributes are conveniently available on all elements in the DOM tree, which would be perfect for recursive traversal: const html = document.firstElementChild; const body = html.lastElementChild; const element = body.children[2].nextElementSibling; Most attributes will be null if they don’t lead to an element in the tree. The only exception is the .children property, which always returns an array-like object that can be empty. Frequently, you won’t know where an element is. The document object, as well as every other element in the tree, has a few methods for element lookup. You can search elements by tag name, ID attribute, CSS class name, or even using a complex CSS selector. You can look for one element at a time or multiple elements at once. For example, to match elements against a CSS selector, you’d call one of these two methods: .querySelector(selector) .querySelectorAll(selector) The first one returns the first occurrence of the matching element or null, while the second method always returns an array-like object with all the matching elements. Calling these methods on the document object will cause the entire document to be searched. You can restrict the scope of the search by calling the same methods on a previously found element: const div = document.querySelector('div'); // The 1st div in the whole document div.querySelectorAll('p'); // All paragraphs inside that div Once you have a reference to an HTML element, you can do a bunch of things with it, such as: - Attach data to it - Change its style - Change its content - Change its placement - Make it interactive - Remove it altogether You can also create new elements and add them to the DOM tree: const parent = document.querySelector('.content'); const child = document.createElement('div'); parent.appendChild(child); The most challenging part about using DOM is getting skilled at building accurate CSS selectors. You can practice and learn using one of many interactive playgrounds available online. JavaScript Frameworks The DOM interface is a set of primitive building blocks for creating interactive user interfaces. It gets the job done, but as your client-side code grows, it becomes increasingly difficult to maintain. The business logic and the presentation layer start to intersect, violating the separation of concerns principle, while code duplication piles up. Web browsers add fuel to the fire by not having a unified interface across the board. Sometimes, a feature that you seek to use isn’t available on all major browsers or is implemented in different ways. To ensure consistent behavior, you need to include an appropriate boilerplate code that hides the implementation details. To deal with these problems, people started sharing JavaScript libraries that encapsulated common patterns and made the browser API a little less jarring. The most popular library of all time by far is jQuery, which until recently dwarfed today’s front-end frameworks in popularity: Although this isn’t a completely honest apples-and-oranges comparison, it shows just how popular jQuery used to be. In peak time, it had more hits than all the other major frameworks and libraries combined. Despite being slightly old and unfashionable, it’s still used in a lot of legacy projects and sometimes even in recent ones. Note: A library contains low-level utility functions that you can call, whereas a framework has total control over your code life cycle. A framework also imposes a certain application structure and is heavyweight in comparison with a library. jQuery has good documentation and a minimal API as there’s only one function to remember, the versatile $(). You can use that function to create and search for elements, change their style, handle events, and much more. Modern web browsers are much better in terms of consistency and support for emerging web standards. So much so, in fact, that some people choose to develop client-side code in vanilla JavaScript without the help of any front-end framework. So why do most people tend to use a front-end framework, anyway? There are pros and cons to using frameworks, but the biggest advantage seems to be that they allow you to operate on a higher level of abstraction. Instead of thinking in terms of document elements, you can build self-contained and reusable components, which increases your productivity as a programmer. That, combined with declarative state management provided by the framework, allows you to tackle the complexity of a highly interactive client-side JavaScript application. If you insist on not using a JavaScript framework for long enough, then you typically end up unwittingly building your own. As for which popular framework to choose, that depends on your goal. To stay relevant, you should invest time into learning pure JavaScript first since front-end frameworks come and go notoriously fast. If you’re looking to land a job as a front-end developer or even as a full-stack engineer, then you should take a look at the job descriptions. Chances are they’ll expect someone with experience in one of these frameworks: At the time of writing this article, these were arguably the most popular JavaScript frameworks. Conclusion In this tutorial, you learned about JavaScript’s origins, its alternatives, and where the language is headed. You compared Python vs JavaScript by taking a closer look at their similarities and differences in syntax, runtime environments, associated tools, and jargon. Finally, you learned how to avoid fatal mistakes in JavaScript. You’re now able to: - Compare Python vs JavaScript - Choose the right language for the job - Write a shell script in JavaScript - Generate dynamic content on a web page - Take advantage of the JavaScript ecosystem - Avoid common pitfalls in JavaScript Try using Node.js for your next scripting project or building an interactive client-side application in the web browser. When you combine what you’ve just learned with your existing Python knowledge, the sky’s the limit. JavaScript has single-line as well as multiline comments: You can start a comment anywhere on a line using a double slash ( //), which is similar to Python’s hash sign ( #). While there are no multiline comments in Python, you can simulate them by enclosing a fragment of code within a triple quote ( ''') to create a multiline string. Alternatively, you can wrap it in an ifstatement that never evaluates to True: You can use this trick, for example, to temporarily disable an existing block of code during debugging.
https://realpython.com/python-vs-javascript/
CC-MAIN-2021-17
refinedweb
16,885
53.41
Svante Signell, le Fri 15 Apr 2011 15:55:49 +0200, a écrit : > Updated, see patch. What kind of leak is this? path is overwritten by a > new malloced array. And so the old pointer is lost and will never be freed. > Are the potential problems coming from that malloc does not clear the > memory allocated, There is no reason why it should clear previous memory. The malloc function can not divine which variable the result will be assigned to! > @@ -3868,7 +3872,16 @@ > eplist = CINDEX(name, '/') ? nullep : epaths; > > for (ep = eplist; *ep; ep++) { > +#ifndef __GNU__ > + free (path); > (void)strcpy(path, *ep); > +#else > + if (path = strdup(*ep) == NULL) { > + pvmlogerror("cannot allocate memory\n"); > + task_free(tp); > + return PvmNoFile; > + } > +#endif Err, no, the free should be before strdup, not before strcpy! Samuel
https://lists.debian.org/debian-hurd/2011/04/msg00176.html
CC-MAIN-2020-50
refinedweb
130
73.27
: Hey Alex, Aren't unidirectional associations similar to aggregation. How can be differentiate them. The parent in an aggregation owns the relationship with the child. If the parent dies, the children die too. In an association, this isn't true. in the doctor patient example, if we delete doctor james, then let the program to print doctor-patient relation, the output will still be correct? No, if you delete the Doctor and then try to print the Doctor-Patient relationship, the program will exhibit undefined behavior, because you'll be accessing memory that has been deleted. When the Doctor is deleted, you also need to ensure that all pointers to the Doctor are removed. This can be a bit of a management challenge. Fortunately, C++ provides a class that can help with this: std::shared_ptr, which we cover in chapter 15. ." Can you please give a code example of what the other way would look like. I tried to make it myself but I am still confused on the way to set it up. Hi Maxpro! I have a question about your design for the Patient/Doctor classes. Isn't it more appropriate to allow the patients to choose their doctors and no the other way around? Not really -- at least in the US, patients can decide what doctors they'd like to see, but ultimately it's up to the doctors to decide if they want to see the patient or not (some doctors are full and not accepting new patients). But, for what it's worth, the code would support patient's adding doctors if that made more sense for your use case. First of all, these tutorials are good stuff. This is probably the wrong lesson to ask these questions, but the car lot example brought them to mind. Is a pure static class just a namespace? As in, is equivalent to Actually, I realised while writing this that namespaces can't have private members, but are there any other differences? Secondly, is and is that why it's called an enum class? Static classes and namespaces are kind of similar. There's a reasonable discussion of when to use which one in stack overflow. An enum class and a class with a public enum aren't the same, though the usage of both would be similar. I haven't read any anecdotes to indicate whether the naming of an "enum class" is related to a having a class with an enum, or whether it was just trying to save on adding new keywords. Maybe both. I have seen in this nad 10.3 chapter that contrarry to what is taught in chapter on overloading operator<<(we just do friend std::ostream& operator<<... foward declaration and then define it outside class) friend std::ostream& operator<< is defind here, in class(by defined I mean adding {....}). Where to use each of these two techniques? Thanks for grad tutorial! Generally it's better to define your functions outside the class (in a separate .cpp file). In most of the tutorials here, we do it inside the class to keep the examples concise and make it easier for you to try yourself. In first example I can't figure out where "m_patient.push_back(pat)" comes from, specificaly where "push_back" is defined. Can anyone pleas help me with that? It's part of the std::vector functionality. See and search for Push_back. "friend Doctor;" --> "friend class Doctor;"? friend Doctor is okay in C++11 and newer. But friend class Doctor works everywhere, so I've updated the lesson accordingly. 1.Could you explain why both the vector types are pointers ( vector< Doctor*>...)? 2. Furthermore, could you please elaborate how this step works? {pat->addDoctor(this);} 1) If the vectors didn't contain pointers to Doctors, then the vector would manage the existence of the Doctor. Having the vector hold pointers to Doctors means the vector only owns that pointer, not the Doctor itself. 2) When addPatient(pat) is called, "this" points to the Doctor the Patient is being added to, and "pat" points to the Patient. Calling pat->addDoctor() should be obvious -- we're calling the Patient::addDoctor() member function on Patient pat. Passing in "this" gives us a way to pass the implicit Doctor object from the Doctor::addPatient() function to the Patient::addDoctor() function. 1) What if you made the vectors hold actual Doctor- and Patient-objects (rather than pointers), then made the addPatient- and addDoctor-functions take references to such objects as parameters? Then the Doctor- and Patient-objects would still exist independently of one another and the vectors would only manage the existence of the references, right? Wouldn't that work just as well as using pointers (in terms of functionality at least, I don't know about performance)? > What if you made the vectors hold actual Doctor- and Patient-objects (rather than pointers) Then you'd end up with a lot of duplicate copies of Doctors and Patients. For example, Patient's Dave and Betsy would both have an independent copy of Doctor Scott. But that's a bit weird, since the Patient's don't "own" the doctors. If we wanted to update Scott's information (e.g. his age, or specialties), we'd have to ensure all the copies got updated. So no, it wouldn't work as well. Even if we added references to the original doctors/patients to the vectors, rather than copies? Like this: So here I made m_patient into a vector of Patient-objects rather than pointers, and made the addPatient-function take a Patient-reference (&pat) rather than a pointer to a Patient, and then this reference is added to the vector. If I now define p1 and d1 as objects rather than pointers and then pass p1 as an argument into d1's addPatient-function, like so: wouldn't this result in a reference to p1 being added to d1.m_patients, and not a copy? So if any changes are made to p1 in the future, those changes will automatically apply to the reference inside d1.m_patients as well? So there wouldn't be any issue with multiple copies of Patients and Doctors, nor any issue with making sure copies are updated properly? And of course I mean for the Patient-class to do the same, by having a vector of Doctor-objects and by having its addDoctor-function take a reference to the Doctor as parameter. Maybe I'm missing something here, but hopefully you can see where I'm coming from. The references just prevent the Doctor or Patient from being copied when you pass it to the add function. Your std::vectors will still hold copies. In order to do what you're suggesting, you'd have to create arrays of references. But C++ don't allow you to do this. So we have to use arrays of pointers. Aaaah, I see, so when I pass the Patient to the addPatient-function, it will be passed as a reference and not a copy, but when I try to add that reference to m_patients, it actually makes a copy of the reference and adds that to the vector? So the vector doesn't actually hold a reference to the original Patient, but a copy that was copied from the reference? That makes sense, thank you! Yes, exactly that! In the deleted constructor is made private. Since nothing is supposed to call it, wouldn't it be better to make it public? My point is, if you try to call it, the compiler will complain that's illegal because of it being private, rather than your intention of not using it. What are other side effects of making the deleted constructor private/public? (I have read something about static functions accessing it or not at StackOverflow but was a poor explanation. That's why I love this tutorial it explains everything so well it makes me actually want to take my phone and keep learning at any moment as if it was an addictive phone game :) Yes, what you're saying makes sense -- because we don't want people creating CarLot() objects, if we make the deleted constructor public, the compiler will give a clearer warning that this is disallowed rather than being masked by the private access control. I've updated the example. Nice thinking! Static members (such as getCar()) can still access the static members, but that's what we're desiring in this case. Thank you Alex Please help.I don't get it this pat.m_doctor[count]->getName() and doc.m_patient[count]->getName() Here, are we not already mentioned the name. ( doc.m_patient[count] = "Dave" ? Why is that we use ->getname(). doc.m_patient[count] returns a Doctor object. We need to use the getName() member function to get the Doctor's name from it. For example I wrote this code.Can you explain me please ? If I add .getName at the end of the a.array[count] ı get an error.Normally it works well I know yours is different but I don’t get it exactly what I don’t understand. #include <vector> #include <string> #include <iostream> class A{ private: std::string m_name; std::vector<std::string> array; public: A(){} std::string getName(){return m_name;} void AddName(std::string a){ array.push_back(a); } friend std::ostream& operator<<(std::ostream &out,A &a){ int length = a.array.size(); for (int count = 0; count < length ; ++count) out << a.array[count] << std::endl; return out; } }; int main(){ A object; object.AddName("a"); object.AddName("b"); std::cout << object << std::endl; return 0; } Ok man I got it thank you. :) Your vector is a vector of std::string, so you can insert and modify strings directly. My vector was a vector of some class containing a std::string. Because the std::string was contained inside the class object, we needed to use a member function to retrieve it. "Consider the simplified case where a Course can only have prerequisite." should be replaced with: "Consider the simplified case where a Course can only have one prerequisite." Quite right. Thanks! Hi Alex, I wounder in exempel patient and doctor. let's asume the program doesnt end when u delete patient and doctor. then the std::vector(*patient/doctor) would be left with dangling point so u need to set them to nullptr, right? Is there a way to pop_back a specific of them? lets asume i wanna just take away p2? You're correct, if we deleted any of the Doctors or Patients without removing them from the vectors, the vectors would be holding dangling pointers. That's not a problem here, because we don't do the deletes until the program is ending anyway. But it's a good question in general. While moving the last element of a vector is easy (use pop_back), removing an arbitrary element from a vector is not straightforward. But you can do so like this: Hi Alex! Thanks for ur great respond!:) I had problem with the one u used and found another that worked vec.erase(vec.begin() + i); // where i is the possition of the vector! I got one more follow up question. let's say Dave (patient) change doctor so Scott is no longer having him ( let's just focus on so we just wanna delete from doctor m_patient). So i wanna make a function that give Doctor option to write a name and it will delete it from vector ( just wanna know how it works) lets say i got a std::cin>> deletePatient; and he types "Dave" how do i search in the vector for "Dave" possition which is m_patient[0] so i can put it in vec.erase and delete him from vector? Many thanks really good learning page:) well i meant the order should be like this: 1. First delete Dave from heap memory. 2. point Dave to nullptr. 3. then take it out from the std::vector through vec.erase right? No. First remove Dave from the vector, then delete it. If you delete Dave before removing it from the vector, the vector will be pointing at deallocated memory, which means when you go to see if the element is Dave, then you'll be accessing deallocated memory, which will cause undefined behavior. Thanks for making it clear. I wounder if i did vector.push_back(new Patient("Dave")). and if i did erase from vector first can i still access it so i can delete it? like in last quiz in 12.x after we add all those circle and triangle and then we deleted them. should we not pop_back so the vector is empty? Yes, if you push_back a Patient, you can pop_back that same Patient off the stack and then delete it. If we intended to continue using the vector, we'd definitely want to get rid of all the stuff we'd deleted manually. However, since the program is ending anyway, it doesn't matter. The vector will clean up after itself (note: it will not delete the pointers, which is good, since we've already done that manually). This is also more difficult than it seems like it should be. See these answers for some various ways to do this. Ok, i really didnt understand that, in the "old_name_" do i put "Dave" ? Yes, if you have the pointer pointing to the Dave object. If not (and you only know the name "Dave") then things are even more complicated. See this thread. Regarding: "We'll implement this function below Doctor since we need Doctor to be defined at that point." Consider: "We'll *define* this function below Doctor *definition* since we need Doctor to be defined *already for this function to be successfully defined*." Thanks for the suggestion. Text updated. Regarding: "They should use Doctor::addPatient() instead, which is publicly exposed" Accurate but does not explicitely expose to the "unseasoned" the association scheme to be used. Consider something of the sort: "We plan the association patient-doctor to occur at the same place where the association doctor-patient occurs. Thus when Doctor::addPatient(...) will be launched, it will launch Patient::addDoctor(...) appropriately so that the two associations will be properly implemented. For this scheme to work Patient::addDoctor(...) needs to be visible to Doctor objects (only, thus not being public) as will be arranged for below through appropriate befriending." Is this correct? Yep! Alex Bro I love you. I learned a lot from this site. thank you senpai Alex would you please explain how we might use that Course/prerequisite example? I'm interested in its design... It has no string members for example! It probably makes more sense for each course to have a name (or numeric id) so we can differentiate them. I've updated the example to include a name member for each course. please I need clarification on how the following lines of codes executes please I can't move on without understanding the above. Thanks in advance The data looks like this: d1->m_patient = ["Dave"] d2->m_patient = ["Dave", "Besty"] p1->m_doctor = ["James", "Scott"] p2->m_doctor = [] p3->m_doctor = ["Scott"] The code you pasted is from a Doctor member function, where doc is set to whichever doc we called (*d1 or *d2). It looks through all of the patients for that Doctor and prints their names. The Patient code works similarly, but iterates through each Patient's Doctors. In the examples above (and in the previous chapter), why do we need to use pointers for some of the objects? Hi, I have a question. On line 82 how come you do not include the Class Name Patient to the overloaded operator << like the way you did the ? Is it not required in this context - if not how come? Because operator<< is a friend function. Friend functions aren't considered members of the class. Grammar fix (plural doctors doesn't quite make sense): Change "The relationship between a doctors and patients is a great example of an association." To: "The relationship between a doctor and its patients is a great example of an association." Fixed, thanks! Is there a reason you used a normal for loop rather than a for each loop? No particular reason. Typo fixed, and good idea. Lesson updated. Under the "Reflexive Association" header, in the first sentence you accidentally called it "reflective" association. Great tutorials, thanks! Also, in the indirect association example (cars in a lot) it may be more clear to say: CarLot::getCar(d.getCarId()); ...rather than hard-coding the "17" again, to demonstrate how the Driver object is associated with the Car object. Hi Alex, I'm so lucky to find ur tutorials. I wanted to know if there is any difference between aggregation and association other than the direction. Thanks In terms of how aggregation and associations are implemented, there's usually little difference in C++. The differences between the two are mostly conceptual. Hi Alex, Can you please explain the syntax in line 28 of the Doctor-Paitent code : [code] std::string getName() const [code] i cannot seem to understand the role of 'const' at the end of function getName() thanks in advance Const in this context means getName() is a const member function -- that is, getName() promises not to modify any of the member variables, or call any non-const functions. Const objects can only call const member functions (and constructors/destructors). Hello Alex First of all, thank you for your generosity on sharing your tremendous knowledge of c++. There is one thing, regarding the DOCTOR and PATIENT program, I am not sure about. I copied and pasted part that I don't know void addPatient(Patient *pat) { // Our doctor will add this patient m_patient.push_back(pat); // and the patient will also add this doctor pat->addDoctor(this); } how does that pointer-to-member operator, -> , work here? because later you write like following in your program d1->addPatient(p1); d2->addPatient(p1); d2->addPatient(p3); Can you explain the mechanism behind there with this pointer-to-member operator? Thank you! Remember that "a->b" is the same as "(*a).b". So, when we say pat->addDoctor(this), we're really just saying, "Get the Patient that pointer pat is pointing at, and then call member function addDoctor with argument this." See lesson 6.12 for more info. Thanks Alex! Typo at the beginning: "quality" should be "qualify" I think. Yup, thanks for catching that. Typo at the begininng, after the "Association" sub-heading, first sentence... I think "and" should be "an". Also, your overloaded operator<< functions for both Doctor and Patient use "std:cout" instead of "out". Thanks, all updated! Alex, You still have "std::cout" instead of "out" in one of the operator<< functions. Thanks, fixed! Great explanation , thank you :)) Name (required) Website
https://www.learncpp.com/cpp-tutorial/10-4-association/comment-page-1/
CC-MAIN-2019-13
refinedweb
3,146
65.32
Bounds vrs Frame Center etc in ui @omz, are the concepts of frames and bounds , Center not 100% correct? Look if it's a little broken that's ok. But I am never sure if it is just me, oR I just don't understand your coord system. Maybe there is some missing documentation. But to me, center for example seems wrong. If I set a buttons center to the Center of its view, it's off by the menu height / 2. The magic 44 number. But it would be great to know if it still has kinks or I just misunderstand how it works. This post is deleted!last edited by @JonB , I seen you deleted your post. But what you said was correct about what I was doing. But I have asked this question many times before in different ways. But it's the chicken and egg syndrome, which comes first. When presenting a view in my mind, needs to be some sort of pre-flight. As the the sizes that are eventually displayed can be very different from device to device. But I don't see a way around it. Something like bounds = v.present('preflight') or can be implemented many different ways. But as far as I can see, its required unless their are some tricks I am unaware of. But I normally do like this import ui class Tester(ui.View): def __init__(self, frame): self.frame = frame self.background_color = 'white' btn = ui.Button(title = 'test button') btn.width = 100 btn.center = self.center self.add_subview(btn) if __name__ == '__main__': f = (0,0,500,500) t = Tester(frame = f) t.present('sheet') If I do it after the present will be ugly. I think you can turn off updates, but I am sure not the way it was meant to work You want btn.center = (self.bounds.center()) (this only works in beta, where bounds returns a Rect) View.center is the same as View.frame.center. So saying btn.center=self.center only makes sense if these are both subviews of the same parent view. If btn is a subview of self, you must set btn's frame center (aka btn.center) equal to self's bounds.center(). Set your button height to be nearly equal to the frame height to see that this actually worked (I wasnt sure at first) Note that when presenting as a sheet, there is (or was... not sure if I retested this one recently) a bug where very small or very large sheets did not honor the frame size (frame got changed after presenting). Unless you set up flex, you need to be careful trying to set absolute coordinates prior to presenting. Strange.. the deleted post was a leftover from a different thread. Not sure if you actually saw the real post or not. Your problem is the self.centerline. The only reason your code sort of works is because you happen to have zero offset between t and f. if you added your tester as a subview in a different location, it won't work.... because frame is expressent in superview coordinates, and bounds is expressed in local coordinates. btn.center = (self.bounds.center()) is all you need from your original code. Thus, you se btn' frame center (in self's coordinate system) to self's center (in self's coordinate system). The other way, tou are trying to set btn's position in self's coordinate system to self's center in t's coordinate system, meaning it will get an offset twice. @JonB , ok. But I will still say it's broken. It's not a problem that is broken. But, I don't think it was meant to be like this. I think it's crazy you can't reply on stuff before presenting. It's like a begin transaction end transaction block, or it should be. Only one thing missing, a preflight method/function/param to present. If that was present, returning the real bounds that it will when the actual .present is called, all these issues go away. in your case, your frame was not being resized, it is being shifted. If it sidnt, your view would be presented at the upper left corner, not centered in the view. Your fix only works because you are misunderstanding self.center. @JonB , ok I will take some time to reflect a little and try and get my head around it. Maybe I will get the Eureka moment. I hope so. Frustrates the hell out of me. But I am always open I am just wrong as it turns out many times I am. Actually, I guess the confusion is, perhaps, that the title bar is not part of your view's height. your button actually was centered (due to luck though, that would only have worked in init where you set your frame x and y to 0), but it looked off-center because of the title bar. Since you cannot place anything in the title bar, it makes sense that it is not part of your view's content, otherwise you would have to layout your view differently depending on whether the bar is there or not. The idea here is that 0,0 starts at the top of your view's editable content. My recollection of buggy resizing was actually for popover, not sheet. Sheet does resize to 576 if the view is too large, such that the title plus height exceed screen height, but it is perfectly happy to show a small view. import ui button = ui.Button(title='button') button.present(hide_title_bar=True) print('center', button.center) print('bounds', button.bounds, button.bounds.center()) print(' frame', button.frame, button.frame.center()) # hide_title_bar=False ('center', Point(512.00, 416.00)) ('bounds', Rect(0.00, 0.00, 1024.00, 704.00), Point(512.00, 352.00)) (' frame', Rect(0.00, 64.00, 1024.00, 704.00), Point(512.00, 416.00)) # hide_title_bar=True ('center', Point(512.00, 384.00)) ('bounds', Rect(0.00, 0.00, 1024.00, 768.00), Point(512.00, 384.00)) (' frame', Rect(0.00, 0.00, 1024.00, 768.00), Point(512.00, 384.00)) phuket's question was about sheet, where you do have control of the frame. In full screen, your frame gets... the full screen minus whatever title bar is presented. The values here look to be correct, and the way you would expect them to be. fullscreen has its own issues, for instance convert_point does not work properly in most orientations. Understood. My message was that printing out view.frameand view.boundshelps. Also that it is possible that view.center != view.bounds.center() != view.frame.center(). Look, I will still reflect on this. But every time I have mentioned a preflight (for the want of a better word) it's just ignored. I just see this as the most logical answer. Yes we can do all our set up after present, but it doesn't make sense, and apparently it doesn't make sense to to it before. In mind it's so simple. We have no idea what @omz display algorithm will size the eventual ui.View at. So, if we can call a function/ method with the present details, he just returns information about the size he will create without creating anything, when we end up calling present. I honestly can not see how it will be ever resolved otherwise. Again if I am wrong , I am wrong. But I feel strongly about this point A few last points. I know it works out if you use layout . But that's the only way. Unless you use layout callback, it can never work. I suspect this is the way it was crafted in the first place. I don't even need to understand Python to know I don't know what I don't know. If the end point for the view size/bounds whatever is present, then without using layout the only logical way is to position all your objects after present is called. I have never seen any sample code here like that. I am happy to do it that way if that's the way. But again, Center is only one issue. And yes, I understand I could have just misunderstood it. But I think the root problem is that the final size of your ui.View is unknown. Even if it was, next year, a new device and @omz modifies his algorithm, back to square one We have no idea what @omz display algorithm will size the eventual ui.View at. We do when ui.View.layout()is called. From the ui docs... def layout(self): # This will be called when a view is resized. You should typically set the # frames of the view's subviews here, if your layout requirements cannot # be fulfilled with the standard auto-resizing (flex) attribute. pass As I have written before, I used to avoid ui.View.layout() but now I embrace it. Those fleeting moments of sanity are a beautiful thing! @ccc , really? Look, I understand if you write anything close to important you need to use layout. For orientation etc.. Doesn't mean the rest of it should be broken. Again, I don't understand why my simple premise is not embrassed. This can all work. Just need to be able to call something like bounds = present.preflight('sheet') Again, I don't get it... It's just so logical For sheet, you know exactly what size your view will be (unless you make it too large). You can use layout() as a "preflight", or to call a preflight function, it is called whenever something is resized, use a flag if you only want it to run once, though be careful that you don't change the frame after creation. Or, use the flex attributes. For instance if you set btn.flex='lrtb' your original code would keep the button centered in all view types. If you want to make ui's that are orientation friendly, you are forced to use layout or flex anyway, anything you hard code based in some preflight will be broken anyway once you rotate the device. How would preflight know how you plan in presenting your view, and in what orientation? It would be nice if the title bar sizes for the various modes and devices was documented, though more for testing what your ui looks like in a small device. @JonB , well preflight could return a tuple for horizontal and vertical frames or bounds or whatever they are. Again, I will reiterate, we can't know what we don't know. But Pytonista knows what is going to do. We just need to be able to ask it without it doing anything other than returning the results of the sizes of what it is going to based on what we are passing to it. Apple Watch is an example. I would put money on the way to resolve this issue is a preflight method. All the code is there already. It also makes omz more device independent from the api. Here you go. I tested on iPad, but you should test this on iPhones, I believe the landscape title bar is only 32 instead of 64, as mentioned in the referenced thread. For sheet, this function returns the max size... if you exceed width or height, presenting as sheet will result in a 576 square. Maybe this function should take a desired frame as an input, returning final frame size. Also, technically, sheet doesn't work on iPhones, so perhaps that should be sensed and accounted for here. These improvements are left as an exercise to the reader :) Again, I can think of only a very few cases where this cannot be done, and done better with either flex or layout. I will admit flex can be annoying, as it takes many layered views to do anything complex. Frankly, for simple throwaways, I usually present first, the 50 msec of dark while the view sets up is not a concern. At one time I started working on a set of Layout Managers, see my uicomponents git, BoxLayout and uicontainers (which has a FlowLayout container), that would make this easier. the major stumbling block was the lack of min/max component size attributes. Now that we can set arbitrary attributes, it would be possible to implement real layout managers, that let you specify desired size, and min/max sizes of components. (edit fixed a few typos and slight mod to the code) import objc_util def get_presented_size(mode,hide_title_bar=False): ''' see''' f=objc_util.ObjCClass('UIApplication').sharedApplication().keyWindow().frame() sz= ui.Size(f.size.width,f.size.height) if sz[1]<=320: status_height=0 title_height=32 else: status_height=20 title_height=44 if mode=='sheet': maxsize=min( sz-(0,title_height*(not hide_title_bar))) return ui.Size(maxsize,maxsize) elif mode=='panel': return sz-(0,status_height+(title_height*(not hide_title_bar))) elif mode=='fullscreen': return sz-(0,(title_height*(not hide_title_bar))) @JonB , nice. I am going to name it preflight 😀😱💋 But thanks, its really nice. My mind is still doing back flips, as this conversation always confuses the hell out of me. Just when I think I have worked it out, I do something that undoes my thinking... But, the mode 'sheet' does not work correctly on an iPad Pro. If you do same as the others, return sz-(0,status_height+(title_height*(not hide_title_bar))) It works. But I am sure done for a reason the way it is so I am sure something else must be done. It would be nice if you could also pass your intended frame or (w/h) to the function so you are not getting the maximums but the real size. That will be presented. Of course this can get effected if you ask for a size not support on a device. But I don't trust my self, to tinker with code. Feel free to use or modify this, as long as you don't call it preflight :) I'll have to try this again (admittedly I didn't test sheet in this final form). For the 768x1024 devices, this should return something like 768x768 when title bar is hidden, and 704x704 when it is not. From what I can deduce, sheet present modes have to work in both orientations, so your size is limited to the narrowest dimension of your screen, minus the title bar height (if title bar is shown). Can you run my little script in and tell me what you get in both orientations on your Ipad Pro? What did this script return? I suspect my logic could be off for some of the newer iPhones as well.
https://forum.omz-software.com/topic/2490/bounds-vrs-frame-center-etc-in-ui
CC-MAIN-2018-39
refinedweb
2,466
75
Thoughts on software development James Shore has published an excellent 2 part series of articles covering agile planning through using risk management practices to make solid commitments. The articles show how to use Risk Multipliers to account for common risks such as turnover, changing requirements, work disruption and illustrates the techniques advocated through using 2 pseudo projects. Start with Part 1 that lays the foundation and paints the "all-goes-well" scenario and then continue onto Part 2 that covers the scenario I think most of us are facing on a day-to-day basis. Enjoy. Having driven the adoption of Scrum during the last few months at our company, I'm constantly looking for ways to improve on the process as we are still experiencing some growing pains. Construx (CEO: Steve McConnell of Code Complete fame) has recently published a new set of white papers covering some excellent topics. One of the white papers is titled: "10 Keys to Successful Scrum Adoption" and I found it a really excellent read with some great best practices and pointers that will help you to avoid some of the pitfalls of adopting Scrum. Go ahead and read it - requires you to register, but registration is free. I've been spending the last month or two getting the SDLC procedures and infrastructure in place for the Silverlight release of our company's Enterprise Physical Asset Management system. We were already using Team Foundation Source Control but wanted to adopt SCRUM as our development methodology and improve on some of our development practices such as using Continuous Integration, Test Driven Development, better Work Item Management and enhanced quality through Code Reviews, Check-in procedures and the selective use of Code Metrics. We have just completed our first 2 week sprint and the team felt that the processes and tooling worked quite well and enjoyed most of the new structure and discipline introduced. In our retrospective we identified some process impediments that we want to improve on going forward. In this post I will highlight some of the processes and tooling we are using. We are using Conchango's SCRUM for Team System template to assist us with Work Item Management. The Conchango template and the associated process guidance provides a good starting point. I highly recommend taking a look at the new Conchango Task Board application to make the daily management of your Product and Sprint Backlog items a breeze. The task board provides a very nice GUI for managing the essential fields of the work items as well as the state transitions. The software is currently in Beta2 and still contains one or two frustrating bugs. As mentioned in my previous post on code reviews, we are using a lightweight code review process that suites the distributed nature of our team. We do reviews after all the development tasks for a PBI (business feature) has been completed. The development tasks are only allowed to move into the Done state once the review has been completed. The review process is also quite focused as we know all checked-in code adheres to a rigorous check-in procedure. We also try and automate a lot of the standards compliance checking through the use of tools like FxCop and NDepend. We try to follow quite a rigorous process for check-in of our code into TFSC. This idea is to try and ensure a continuous level of quality for the code being committed. Here are the steps that all code needs to adhere to: As I previously mentioned we use the excellent TeamCity for our CI server. We are very happy with the results. We have integrated NUnit, NCover, FxCop and NDepend into 3 separate builds configurations: We have just started using the Microsoft Silverlight Testing framework for our client side UI tests and the idea is to integrate this into the build process for the next sprint that starts tomorrow. More on all of this at a later stage. I've previously blogged about using code reviews as an effective way to minimize defects, improve code quality and keep code more maintainable. In my new job I had to define a code review process that suited our development team as well as the Team Foundation Server toolset and the Conchango SCRUM for Team System process template we are using. This post details what process and tooling we came up with. To improve on the cost-benefit factor of the reviews we opted for a lightweight code review process. The aim was to get improved code quality but not incur the overhead of traditional meetings-based inspections. Because of the distributed nature of our development team, we are using a flavour of the e-mail pass-around review style. In a traditional email pass-around, the author or source code management system packages the code changes and sends an e-mail to the reviewers. We decided to use the Code Review Sidekick of Attrice Corporation to get a consolidated list of code changes that have been checked into TFS. We do not send an e-mail containing the changes but instead use a Code Review Sprint Backlog item to track and manage the review process. More on this in the Review Process section. We do reviews after all the development tasks required to complete a piece of business functionality have been checked in. We discussed the issue of using shelving to prevent not reviewed code from being checked in, but the team felt that as we are following a quite rigorous process for check-in we can do the review at a later stage on the checked-in changes. The developers working on completing their tasks are however not allowed to change the status of their tasks to Done. The reviewer has the responsibility of signifying the work as completed when he/she has completed a successful review. To make the review process as light as possible we also try to automate a lot of the coding standards and architectural standards through the use of tools like FxCop, ReSharper and NDepend. Although there are some CodePlex projects that provide support for customizing TFS to support code reviews and code review flows, we decided to steer away from using these solutions as they depend on creating custom work items and custom work item states. We wanted to only use the default Conchango SCRUM for Team System process template. Unfortunately the template doesn't contain a Ready for Review state for a Sprint Backlog item, but we decided to use the Ready For Test state to signify that a task is ready for review as we execute the testing as part of the check-in. As already mentioned, to manage and track the code review process and feedback, we create a special Code Review Sprint Backlog Item (SBI) associated with every Product Backlog item (PBI) during Sprint planning. The author ensures that all the development SBI's for the PBI are in the Ready for Test status and have been checked in. The author marks the Code Review SBI as Ready for Test and assigns it to the reviewer to request a review. The reviewer gets notified of the new review work item assigned to him/her and uses the Code Review Sidekick tool to get a consolidated list of changesets with their associated files for the work completed. This presents the review with a list of the code changes that were made. When the reviewer right clicks on a file to review, he/she can have up to 3 options. The options imply the following: The reviewer conducts the review and continuously updates the Work remaining on the Code Review SBI. Any questions are clarified by the author through Skype/Interwise or verbal communication. The reviewer also adds any comments/notes to the Code Review SBI itself. After completing the review, the reviewer contacts the author to discuss the issues identified. The author and reviewer agrees upon what issues needs to be addressed. Here we can have 2 possible outcomes: Finally, once the reviewer is satisfied, the reviewer changes the Code Review SBI status to Done and changes the status of the development SBI's to Done to signify that the PBI is completed.. We had a discussion in our team about standards for generated and shared code files. When talking about shared code, we are referring to sharing the same source file between different projects as in a GlobalAssemblyInfo.cs file that contains the AssemblyInfo attributes that are shared by the different projects in your solution. We ended up agreeing to the following: <Compile Include="Staff.Validation.cs"> <DependentUpon>Staff.cs</DependentUpon> </Compile> What conventions are you using to manage your generated and shared code files? Just a short post to make people, who are using Subversion, aware of the v2.0 release of AnkSVN - "the open source Subversion plugin for Visual Studio". The biggest new feature in my mind is that it has been rewritten to run as a VS SCC plugin. Screenshots are available here. I've been a happy VisualSVN user up till now but this new release may just convince me to make a switch. Great work! As the blog post says, I haven't been too active on the blogging front for the past 2-3 months. This has been mostly due to the career move I made to Pragma Products beginning of June. Having been at the company now for just more than a month I am very happy with the move. The company, people and the opportunities going forward are great. Moving from a big corporate environment into a smaller company has been a very positive move for me. It is great to be able to add immediate value to an environment! We are embarking on an exciting re-write of the company's core enterprise asset management application. We will be utilizing new and exciting technologies like Silverlight 2 and WCF so expect some future blog content on this as we get to know the technologies more intimately. The first release is for a big international customer and we will have to support multiple languages like Chinese and German so I guess we will be taking the Turkey test Currently I'm spending some time on improving the SDLC. We are using Team Foundation Server 2008 and we want to use the Conchango Scrum for Team System process template. I haven't worked directly with Scrum before, but having had some experience with the Open Unified Process and Rational Unified Process the concepts are familiar. I'll miss working with Subversion though I know about SvnBridge but to me that is fighting your tools in stead of trying to work in the way they prescribe. I'm looking forward to figuring out how to implement Continuous Integration using Team Build to see how it matches up with the process I documented using open source tools. One thing that I am still getting used to in the new environment is the fact that more than half of the team members work from home for large portions of the week. This makes it quite challenging for a newcomer to get to know everybody. We have a weekly team meeting that is held mostly at our offices and I met most of the team through these meetings. Needless to say it has taken some time for me to get to know everybody. Fortunately the planning sessions we had last week was mandatory attendance and I was able to interact a bit more with them. The team uses Skype and Interwise for video conferencing to keep in touch. After being a bit skeptical at first, the technology works quite well. I'm however still a bit cautious about the whole distributed agile development team so I might blog about my experiences of this process going forward as well. Till later... ReSharper 4.0 is finally released!! See the Features and What's New. Steve McConnell, author of the excellent Code Complete, has published an updated version of the Software Classic Mistakes he identified in his book Rapid Development published in 1996. The idea was to see what the industry is like 12 years later. Read more about it here. I've said it before and I want to say it again. If you are serious about code quality and you want to use static code analysis to help you improve on your code's quality, you just have to look at getting yourself a NDepend license. Patrick Smacchia, the developer of NDepend, has just posted another excellent article that highlights some of the major areas where NDepend can help you pinpoint problematic areas in your code base. Because of NDepend's excellent Code Query Language, you are able to write your own dynamic active code conventions that fit your environment and architecture of the system you are developing. In the article Patrick gives some excellent examples of some of the common CQL queries that you can use to prevent incorrect coupling, layering etc. and ways to ensure the correct evolution of your code. So don't delay - get yourself a license now and don't forget to fit NDepend into your continuous integration process. Use it together with FxCop and you will go a long way towards enhancing the quality of the code you are delivering. May will be my last month working as a .NET Technology Consultant at Sanlam. I have accepted an exciting position as Software Architect at Pragma Products where I will join the team beginning of June 2008 to work on the next generation of their software products. I'm super excited about the company, challenges and the people I will be working with and for the opportunity to finally work in an environment where I can concentrate on using only .NET to build great software. We went through an exercise in the past month or so of trying to select the best platform to use in our company to engage the development communities using mediums like forums/blogs and for managing the content surrounding our development methodology, coding standards and processes through using a wiki. We are currently using two different open source products - MoinMoin as a wiki and phpBB for developer related forums. These products work fine, but the idea was to see whether there are better alternatives available and whether there is not a single product offering that combines wiki/forum functionality with additional collaboration mediums like blogs to create a single source of information going forward. Having said this, the most important part of the solution for us is the wiki and as such the platform of choice decision was heavily skewed towards the wiki support provided by the solution. As the enterprise development landscape in my company is unfortunately mostly geared towards Java, I had to identify and "fight for" different Microsoft solutions that supported these requirements. The first solution that came to mind was Windows SharePoint Services (WSS) 3.0. At the time I was not allowed to investigate MOSS as the product had to be free. In addition to WSS 3.0, I also previously used two other ASP.NET wiki engines (Perspective Wiki and ScrewTurn Wiki) for managing project related content. I took these into account as well. The Wiki functionality provided by WSS 3.0 is IMHO still a bit immature. I think this has to be expected from a version 1 feature (wiki's are new to WSS 3.0) and by a platform that tries to address the complete content management picture, i.e. support for forums, blogs, projects etc. This "weakness" is also the biggest strength of WSS 3.0 as it provides a consolidated platform and experience that makes it quite easy to setup a content management solution that includes forums, blogs, projects and much, much more. The platform itself is extensible as reflected by the ongoing efforts by the community to plug some of the gaps through the Community Kit for SharePoint. I was quite impressed by the functionality provided by the Enhanced Blog Edition as it provided in my mind most of the missing pieces for a blogging platform, but unfortunately the Enhanced Wiki Edition is still in early days and still missing quite a few important pieces. The lack of good Wiki support therefore unfortunately ruled WSS out of the picture for us. Both Perspective and ScrewTurn, as dedicated wiki engines, provide better support for Wiki functionality than WSS 3.0. Perspective v1 was the first wiki engine I used 2 years back and the results were not too bad. It has a WYSIWIG editor and for simple scenarios it can be quite effective. The current v3 Alpha includes a complete overhaul of the Wiki engine with lots of new and exciting features and it seems quite promising. My only concern is that progress is very slow with the wiki being developed by a single developer. These days it seems like ScrewTurn wiki has a lot of momentum behind it as evident through having been declared the winner of Jeff Atwood's donate $5000 to .NET Open Source project. I'm using ScrewTurn wiki daily and the wiki is frequently being updated with bug fixes and new releases. There is also an extensive set of community plugins available that add additional missing functionality like AD integration etc. It unfortunately does not currently support WYSIWYG editing and for us that is a show stopper seeing that our users want to copy/paste content from Word documents and use general RTF editing capabilities. With no strong MS candidate to bring to the table, I was forced (kicking and screaming ) to accept a non MS commercial offering! We opted to buy a Confluence license and I must say that I am very happy with our choice. Confluence really provides everything you need in terms of a Wiki engine and the price tag is in my opinion quite reasonable given all the features that you get. The only thing missing from Confluence is the lack of support for forums. For this I think we'll stick with phpBB and connect the two using an include page macro. It would be interesting to know whether there are any MS technology based teams that are using anything besides WSS/MOSS/Team System for content collaboration. What other tools are in your opinion worth looking at? Jeff Atwood's post about UI-First Software Development came just about at the right time as I was looking for some tooling to assist me with a quick prototype that I had to create for a reference application we are building. For the purposes of this application we needed something more formal than paper prototypes and the tooling should be available for use in different projects within our company. My favourite UI prototyping tool is MockupScreens. It is an inexpensive tool that allows you to quickly and effectively create screen mockups using the most common windows controls with sample data and screen navigation included. You can export the screens as images or as HTML pages with screen navigation to demo to your stakeholders. There are certainly more professional and expensive prototyping tools available (GUI Design Studio comes to mind), but for simple, yet effective mockups, MockupScreens IMHO gives me the most value for money. Here is a screen shot of a mockup screen created with MockupScreens. Jeff also links to using PowerPoint 2007 as a wireframe prototyping tool. I haven't considered using PowerPoint as an option before and I was more interested in the GUUUI Web Prototyping Tools for Visio that some of his readers recommended. As Visio is already licensed within my company we would not have to buy any additional licenses. I decided to give it bash. I must say I was pleasantly surprised by the functionality provided by these templates. It allows you to draw up a site master that includes the common layout and navigation for your site. You can then use this master page as the background for your other pages very much like the master page support of ASP.NET. The sketchy interface widgets also look great and conveys the message of this being a prototype very clearly. Here is a sample master page for a book store web site: You can easily link up pages together using the hyperlink functionality of Visio and also publish the wireframe to HTML that will include the navigation via the hyperlinks. It takes a bit longer to create than MockupScreens, but the master page feature is excellent as it allows me to change the common layout in one place and have it correctly reflect in all the other pages! Excellent stuff! What other tools would you recommend for doing UI Prototyping? I've been going through some chapters of Agile Principles, Patterns and Practices in C# (which I highly recommend) and I came across a nice trick when implementing the Null Object Pattern that I wanted to share. For those new to the Null Object Pattern, I quote from Wikipedia:. Martin Fowler also refers to the Null Object Pattern as the Special Case pattern in his Catalog of Patterns of Enterprise Application Architecture. In the application I am currently building, a product catalog can be searched based on price ranges. For this I created a PriceRangeItem class that is contained within a list provided by a PriceRange class. The idea is to bind the UI against the list to provide a filter for the products based on the price range. The list needs to include an "empty choice" item that, when selected, should not affect the search results. This "empty choice" item is an ideal candidate for a Null Object. Here is my implementation: 1 public class PriceRangeItem 2 { 3 private readonly int rangeId; 4 private readonly double rangeFrom; 5 private readonly double rangeThru; 6 private readonly string rangeText; 7 8 public static readonly PriceRangeItem Null = new NullPriceRangeItem(0, double.MinValue, double.MaxValue, string.Empty); 9 10 #region Constructors + Destructors 11 12 public PriceRangeItem(int rangeId, double rangeFrom, double rangeThru, string rangeText) 13 { 14 this.rangeId = rangeId; 15 this.rangeFrom = rangeFrom; 16 this.rangeThru = rangeThru; 17 this.rangeText = rangeText; 18 } 19 20 #endregion 21 22 #region Public Members 23 24 public int RangeId 25 { 26 get { return rangeId; } 27 } 28 29 public double RangeFrom 30 { 31 get { return rangeFrom; } 32 } 33 34 public double RangeThru 35 { 36 get { return rangeThru; } 37 } 38 39 public string RangeText 40 { 41 get { return rangeText; } 42 } 43 44 #endregion 45 46 #region NullPriceRangeItem Class 47 48 private class NullPriceRangeItem : PriceRangeItem 49 { 50 public NullPriceRangeItem(int rangeId, double rangeFrom, double rangeThru, string rangeText) 51 : base(rangeId, rangeFrom, rangeThru, rangeText) 52 { 53 } 54 } 55 56 #endregion 57 } The trick is to declare NullPriceRangeItem as a nested, private class to prevent external instances of the class being created. A public static field called Null is then added to PriceRangeItem to expose the single instance of the Null Object that should exist for the application (see line 8). Other application code can now easily refer to this instance by using code like: 1 if (SelectedPriceRange != PriceRangeItem.Null) 2 { 3 ... 4 } 5 For completeness sake, here is the code for PriceRange class that shows the Null Object being added to the list of price ranges: 1 public static class PriceRange 2 { 3 private static readonly List<PriceRangeItem> list = null; 4 private const string RangeText = "{0:C} - {1:C}"; 5 6 static PriceRange() 7 { 8 list = new List<PriceRangeItem>(); 9 10 list.Add(PriceRangeItem.Null); 11 list.Add(new PriceRangeItem(1, 0, 50, string.Format(RangeText, 0, 50))); 12 list.Add(new PriceRangeItem(2, 51, 100, string.Format(RangeText, 51, 100))); 13 list.Add(new PriceRangeItem(3, 101, 250, string.Format(RangeText, 101, 250))); 14 list.Add(new PriceRangeItem(4, 251, 1000, string.Format(RangeText, 251, 1000))); 15 list.Add(new PriceRangeItem(5, 1001, 2000, string.Format(RangeText, 1001, 2000))); 16 list.Add(new PriceRangeItem(6, 2001, 10000, string.Format(RangeText, 2001, 10000))); 17 } 18 19 public static List<PriceRangeItem> All 20 { 21 get { return list; } 22 } 23 24 public static List<PriceRangeItem> InRange(double rangeFrom, double rangeThru) 25 { 26 return All.FindAll(delegate(PriceRangeItem item) 27 { 28 return item.RangeFrom >= rangeFrom && item.RangeThru <= rangeThru; 29 }); 30 } 31 } 32 Because the ranges with which the NullPriceRangeItem was created with are the minimum and maximum values for a double, you can safely enough use the list without worrying about the NullPriceRangeItem limiting the search results. Addicted to
http://dotnet.org.za/cjlotz/
crawl-002
refinedweb
4,098
59.53
There are several tools available to query and browse WMI information. These tools can be very useful in situations in which you want to access WMI information but do not want to write a script to do it. The WMI command-line tool (WMIC) is a powerful tool that can expose virtually any WMI information you want to access. It is available in Windows XP and Windows Server 2003. Unfortunately, WMIC does not run on Windows 2000, but it can still be used to query WMI on a Windows 2000 machine. WMIC maps certain WMI classes to "aliases." Aliases are used as shorthand so that you only need to type "logicaldisk" instead of "Win32_LogicalDisk". An easy way to get started with WMIC is to type the alias name of the class you are interested in. A list of all the objects that match that alias/class will be listed. wmic:root\cli>logicaldisk list brief DeviceID DriveType FreeSpace ProviderName Size VolumeName A: 2 C: 3 1540900864 4296498688 W2K D: 3 15499956224 15568003072 Z: 5 0 576038912 NR1EFRE_EN Most aliases have a list brief subcommand that will display a subset of the properties for each object. You can run similar queries for services, CPUs, processes, and so on. For a complete list of the aliases, type alias at the WMIC prompt. The creators of WMIC didn't stop with simple lists. You can also utilize WQL to do more complex queries. This next example displays all logical disks with a drivetype of 3 (local hard drive): wmic:root\cli>logicaldisk where (drivetype = '3') list brief DeviceID DriveType FreeSpace ProviderName Size VolumeName C: 3 1540806144 4296498688 W2K D: 3 15499956224 15568003072 We have just touched the surface of the capabilities of WMIC. You can invoke actions, such as creating or killing a process or service, and modify WMI data through WMIC as well. For more information, check out the Support WebCast "WMIC: A New Approach to Managing Windows Infrastructure from a Command Line," available at. Help information is also available on Windows XP and Windows Server 2003 computers by going to Start Help, and search on WMIC. Included as sample applications with the original WMI SDK, the WMI CIM Studio and WMI Object browser are web-based applications that provide much more benefit than just being example applications provided in the SDK. The following is a list of the tools and their purpose: The WMI CIM Studio is a generic WMI management tool that allows you to browse namespaces, instantiate objects, view the instances of a class, run methods, edit properties, and even perform WQL queries. The WMI Object Browser allows you to view the properties for a specific object, look at the class hierarchy, view any associations, run methods, and edit properties for an object. The WMI Event Registration allows you to create, view, and configure event consumers. The WMI Event Viewer displays events of configured event consumers. The web-based WMI tools can be obtained separately from the WMI SDK at:. The WMI SDK provides the complete WMI reference documentation along with numerous sample scripts and programs. It also includes the web-based WMI tools described in the previous section. The WMI SDK can be downloaded from the Platform SDK site located at.
http://etutorials.org/Server+Administration/Active+directory/Part+III+Scripting+Active+Directory+with+ADSI+ADO+and+WMI/Chapter+26.+Scripting+with+WMI/26.4+WMI+Tools/
CC-MAIN-2017-04
refinedweb
543
59.94
TSSslSession¶ Synopsis¶ #include <ts/ts.h> - TSSslSession TSSslSessionGet(const TSSslSessionID * sessionid)¶ - int TSSslSessionGetBuffer(const TSSslSessionID * sessionid, char * buffer, int * len_ptr)¶ - TSReturnCode TSSslSessionInsert(const TSSslSessionID * sessionid, TSSslSession addSession)¶ - TSReturnCode TSSslSessionRemove(const TSSslSessionID * sessionid)¶ Description¶ These functions work with the internal ATS session cache. These functions are only useful if the ATS internal session cache is enabled by setting proxy.config.ssl.session_cache has been set to 2. These functions tend to be used with the TS_SSL_SESSION_HOOK. The functions work with the TSSslSessionID object to identify sessions to retrieve, insert, or delete. The functions also work with the TSSslSession object which can be cast to a pointer to the openssl SSL_SESSION object. These functions perform the appropriate locking on the session cache to avoid errors. The TSSslSessionGet() and TSSslSessionGetBuffer() functions retrieve the TSSslSession object that is identified by the TSSslSessionID object. If there is no matching session object, TSSslSessionGet() returns NULL and TSSslSessionGetBuffer() returns 0. TSSslSessionGetBuffer() returns the session information serialized in a buffer that can be shared between processes. When the function is called len_ptr should point to the amount of space available in the buffer parameter. The function returns the amount of data really needed to encode the session. len_ptr is updated with the amount of data actually stored in the buffer. TSSslSessionInsert() inserts the session specified by the addSession parameter into the ATS session cache under the sessionid key. If there is already an entry in the cache for the session id key, it is first removed before the new entry is added. TSSslSessionRemove() removes the session entry from the session cache that is keyed by sessionid. TSSslTicketKeyUpdate() updates the running ATS process to use a new set of Session Ticket Encryption keys. This behaves the same way as updating the session ticket encrypt key file with new data and reloading the current ATS process. However, this API does not require writing session ticket encryption keys to disk. If both the ticket key files and TSSslTicketKeyUpdate() are used to update session ticket encryption keys, ATS will use the most recent update regardless if whether it was made by file and configuration reload or API.
https://docs.trafficserver.apache.org/en/latest/developer-guide/api/functions/TSSslSession.en.html
CC-MAIN-2019-39
refinedweb
355
54.73
( Fi-1 whereas next is Fi #include <iostream> int main() { unsigned int tests; std::cin >> tests; while (tests––) { unsigned long long last; std::cin >> last; unsigned long long sum = 0; // first Fibonacci numbers unsigned long long a = 1; unsigned long long b = 2; // until we reach the limit while (b <= last) { // even ? if (b % 2 == 0) sum += b; // next Fibonacci number auto next = a + b; a = b; b = next; } std::cout << sum << std::endl; } return 0; } public final class p002 implements EulerSolution { public static void main(String[] args) { System.out.println(new p002().run()); } /* * Computers are fast, so we can implement this solution directly without any clever math. * Because the Fibonacci sequence grows exponentially by a factor of 1.618, the sum is * bounded above by a small multiple of 4 million. Thus the answer fits in a Java int type. */ public String run() { int sum = 0; int x = 1; // Represents the current Fibonacci number being processed int y = 2; // Represents the next Fibonacci number in the sequence while (x <= 4000000) { if (x % 2 == 0) sum += x; int z = x + y; x = y; y = z; } return Integer.toString(sum); } } def fibs_up_to(t, a=1, b=2): while a < t: yield a a, b = b, a + b fibs = fibs_up_to(4000000) sum(n for n in fibs if n % 2 == 0) #Brute force 2 - only generating even fibs (every third) def even_fibs_up_to(t): a, b = 2, 3 while a < t: yield a a, b = a + 2 * b, 2 * a + 3 * b fibs = even_fibs_up_to(4000000) sum(n for n in fibs)) Hackerrank see Above.
https://nerdprogrammer.com/project-euler-problem-2-even-fibonacci-numbers-solution/
CC-MAIN-2021-17
refinedweb
261
57.95
Archives Tools galore How many tools do you think we can use for .NET development? Well, more than 905 tools (including 292 libraries) according to what I have referenced in SharpToolbox so far! How many do you actually know? ;-) I started collecting this information more than three years ago now... It's been a while since I've taken a look at the counters. If I remember well, the latest categories that I've added are Grid computing, Workflow, Rich-client UI. The categories with the most tools are IDE - IDE-addins, Persistence - Data-tier, Reporting, Object-relational mapping, and ASP.NET. JavaToolbox is in good shape too with 558 tools including 196 libraries! Searching for a tool? Keep digging, I have most of them in store :-) Linq to Amazon implementation fore steps On Monday, I have announced an implementation of Linq for Amazon Web Services, that allows to query for books using the following syntax: var query = from book in new Amazon.BookSearch() where book.Title.Contains("ajax") && (book.Publisher == "Manning") && (book.Price <= 25) && (book.Condition == BookCondition.New) select book; Before getting to the details of the implementation code, I'd like to describe what we need to do in order to be able to use such code. First, as you can see we work with book objects. So, let's defined a Book class: public class Book { public IList<String> Authors; public BookCondition Condition; public String Publisher; public Decimal Price; public String Title; public UInt32 Year; } Here I use public fields for the sake of simplicity, but properties and private fields would be better. You can see that this class defines the members we use in our query: Title, Publisher, Price and Condition, as well as others we'll use later for display. Condition is of type BookCondition, which is just an enumeration defined like this: public enum BookCondition {All, New, Used, Refurbished, Collectible} The next and main thing we have to do is define this BookSearch class we use to perform the query. This class implements System.Query.IQueryable<T> to receive and process query expressions. IQueryable<T> is defined like this: interface IQueryable<T> : IEnumerable<T>, IQueryable which means that we have to implement the members of the following interfaces: interface IEnumerable<T> : IEnumerable { IEnumerator<T> GetEnumerator(); } interface IEnumerable { IEnumerator GetEnumerator(); } interface IQueryable : IEnumerable { Type ElementType { get; } Expression Expression { get; } IQueryable CreateQuery(Expression expression); object Execute(Expression expression); } and finally the IQueryable<T> interface itself of course: interface IQueryable<T> : IEnumerable<T>, IQueryable, IEnumerable { IQueryable<S> CreateQuery<S>(Expression expression); S Execute<S>(Expression expression); } It may look like a lot of work... I will try to describe it simply so that you can create your own implementation of IQueryable without too much difficulty once you get to know how the mechanics work. In order to be able to implement IQueryable, you need to understand what happens behind the scenes. The from..where..select query expression you write in your code is just syntactic sugar that the compiler converts quietly into something else! In fact, when you write: var query = from book in new Amazon.BookSearch() where book.Title.Contains("ajax") && (book.Publisher == "Manning") && (book.Price <= 25) && (book.Condition == BookCondition.New) select book; the compiler translates this into: IQueryable<Book> query = Queryable.Where<Book>(new BookSearch(), <expression tree>); Queryable .Where is a static method that takes as arguments an IQueryable followed by an expression tree. I can hear you crying out loud: "What the hell is an expression tree?!". Well, an expression tree is just a way to describe what you wrote after where as data instead of code. And what's the point? - To defer the execution of the query - To be able to analyze the query to do whatever we want in response And why is it called a "tree"? Because it's a hierarchy of expressions. Here is the complete expression tree in our case: Expression.Lambda<Func<Book, Boolean>>( Expression.AndAlso( Expression.AndAlso( Expression.AndAlso( Expression.CallVirtual( typeof(String).GetMethod("Contains"), Expression.Field(book, typeof(Book).GetField("Title")), new Expression[] { Expression.Constant("ajax") }), Expression.Call( typeof(String).GetMethod("op_Equality"), null, new Expression[] { Expression.Field(book, typeof(Book).GetField("Publisher")), Expression.Constant("Manning") })), Expression.Call( typeof(Decimal).GetMethod("op_LessThanOrEqual"), null, new Expression[] { Expression.Field(book, typeof(Book).GetField("Price")), Expression.Constant(new Decimal(25), typeof(Decimal)) })), Expression.EQ( Expression.Convert( Expression.Field(book, typeof(Book).GetField("Condition")), typeof(BookCondition)), Expression.Constant(BookCondition.New))), new ParameterExpression[] { book })); If you look at this tree, you should be able to locate the criteria we have specified in our query. What we will do in the next step is see how all this combines and how we will extract the information from the expression tree to be able to construct a web query to Amazon. We'll keep that for another post... Stay tuned! Cross-posted from Introducing Linq to Amazon As an example that will be included in the Linq in Action book, I've created an example that shows how Linq can be extended to query anything.. steps Cross-posted from Linq rebranding Soma announced that the naming scheme for the Linq technologies has been simplified: LINQ to ADO.NET includes:So long DLinq and XLinq... Welcome Linq to Whatever! LINQ to DataSet LINQ to Entities LINQ to SQL (formerly DLinq) LINQ support for other data types includes: LINQ to XML (formerly XLinq) LINQ to Objects I agree that this will make things a bit more explicit, especially the separation between what is related to ADO.NET and what is not. It will also help implementors find names for their products using Linq. We can expect to see more LINQ to Something in the future... Cross-posted from 20-06-2006 20:06 As pointed out by Frank Arrigo, this is a fun date. Will this post make it to the right time? ADO.NET Entity Framework documents are back (ADO.NET vNext) The original articles about the ADO.NET Entity Framework didn't stay online very long, but this time, two official documents are available: - The ADO.NET Entity Framework Overview - Next-Generation Data Access (Making the conceptual level real) In these documents, you'll encounter the following: - The Entity Data Model - Entity SQL - LINQ to Entities - LINQ to DataSets - LINQ to SQL (formerly known as DLinq) We still don't know if DLinq will continue to live for a very long time (I don't see why we'd need two object-relational mapping solutions from Microsoft except for creating confusion). Update: These documents are also available online as web pages, not just Word documents. - The ADO.NET Entity Framework Overview - Next-Generation Data Access (Making the conceptual level real) Cross-posted from Bye bye DLinq, Hello Linq for Sql and the ADO.NET Entity Framework! Some of you may have read the documents on ADO.NET 3.0 and the Entity Framework when they were published, before they were promptly removed. The big question since that time has been: What will happen to DLinq in regard to the future of ADO.NET, which seemed to offer the same services and more with a different solution? Bye bye WinFX, Hello .NET Framework 3.0! Somasegar, Corporate Vice President of Microsof's Developer Division, has announced that the WinFX brand has been killed! You should now talk about ".NET Framework 3.0" instead when you refer to the package that contains WPF (Windows Presentation Foundation - Avalon), WCF (Windows Communication Foundation - Indigo), WF (Workflow Foundation - Windows Workflow Foundation - Winoe), WCS (Windows CardSpace - InfoCard). The .NET Framework 3.0 runs on the .NET CLR 2.0. What will the framework delivered as part of the Orcas wave be called? .NET Framework 3.5? .NET Framework 4.0? And it will contain C# 3.0 and VB.NET 9.0... Clear as mud, isn't it? Microsoft's marketing likes to play funny games... Copying and pasting text with styles removed - PureText I've been doing a lot of copy&paste operations between several Word documents or HTML pages lately, and I got fed up with doing "Edit | Paste Special... | Unformatted Text" all the time to avoid having my Word documents polluted with styles I don't want. A great little free tool came to the rescue: Steve Miller's PureText. I highly recommend that you give it a try! Pasting text with formatting and styles removed is as easy as using the Apple+V shortcut, an easy replacement for CTRL+V. Oh, for those who don't know, the Apple key is the one that looks like a window that looks like a waving flag located between CTRL and ALT... Viewing the SQL generated by DLinq As demonstrated by Sahil in the last issue of his Demystifying DLinq series, you can get see the SQL DLinq would execute for a given query at debug-time using the built-in Query Visualizer: This useful little tool can also run the SQL query and show you the results if you want. In addition to the Query Visualizer, you can also get the SQL queries at runtime using the following line of code to redirect the SQL to the console: db.Log = Console.Out; Of course, you can also redirect the logs to anything else as far as it's a System.IO.TextWriter. Even better, you can also call GetQueryText(myQuery) on your DataContext instance to get the SQL at any time you want for a given query. If you have updated in-memory objects returned by DLinq, you can get the SQL that would be executed on the database by using DataContext.GetChangeText(). A difference between GetQueryText() and GetChangeText() on one side, and the Query Visualizer on the other side, is that with the latter you can see the value of the SQL query's parameters. Cross-posted from Focusing on LINQ I've recently published some posts about LINQ, and as I wrote before, you can expect to see more on this weblog in the coming months. The reason? I'm currently writing a book on LINQ! I've opened a web site dedicated to the LINQ, XLINQ, DLINQ technologies and to the book. If you visit linqinaction.net you'll be able to find useful pointers to learn more about these technologies. Hopefully, the book and the site will give you everything you need to understand LINQ.
http://weblogs.asp.net/fmarguerie/archive/2006/6
CC-MAIN-2015-06
refinedweb
1,732
55.74
Raphaël is a small JavaScript library written by Dmitry Baranovskiy of Atlassian, that allows you to create and manipulate vector graphics in your web pages. It’s amazingly simple to use and is cross-browser compatible; supporting Internet Explorer 6.0+, Safari 3.0+, Firefox 3.0+, and Opera 9.5+. Internally Raphaël uses VML in IE and SVG in the other browsers. Now, demos involving circles and squares are fine, but I wanted to create an example that demonstrated a legitimate, practical use of vector graphics. So how about real-time statistics measurement? Here’s a screenshot of my Current Sprocket Usage line graph that plots real-time “sprocket” usage levels. Best of all, it was a snap to make. The HTML is simple; we just need a heading and container to hold our canvas — a div element: <h1>Current Sprocket Usage: <span id="readout"></span></h1> <div id="graph"></div> To start we have to generate a new graphics canvas. I always like to place all my code within an object definition in order to create a separate namespace, so we’ll start with the following code: var SpGraph = { init : function(){ SpGraph.graph = Raphael("graph", 400, 200); SpGraph.graph.rect(0, 0, 390, 110, 10).attr("fill", "#000"); } } window.onload = function () { SpGraph.init(); }; Using the window.onload event we call our SpGraph.init method. Within this method we create our canvas using Raphael("graph", 400, 200). The first argument is the ID of our container element, the other two represent width and height. We store the returned canvas object in our SpGraph.graph property. With the next line we create a rectangle and set some attributes: SpGraph.graph.rect(0, 0, 390, 110, 10).attr("fill", "#000"); The rect method allows us to draw a rectangle specifying the x coordinate, y coordinate, width, height, and optionally a corner radius. Notice that we’ve also chained a call to the attr method to set the fill color. All Raphaël graphic objects support the attr method and there’s a range of attributes you can set. Raphaël supports chaining all its methods, which we will take advantage of soon. Our effort so far has resulted in this lovely black rectangle with rounded corners. Now lets add stripes! To do this we add the following loop to the SpGraph.init method: for(var x = 10; x < 110; x += 10) { var c = (x > 10) ? "#333" : "#f00"; SpGraph.graph.path({stroke: c}).moveTo(0, x).lineTo(390,x); } The loop executes 10 times drawing a line each time; a red line for the first one and a gray line for the others. The Raphaël path method initializes the path mode of drawing, returning a path object. It doesn’t actually draw anything itself; you have to use the path object methods, which are chainable. The moveTo method moves the drawing cursor to the specified x and y coordinates and the lineTo method draws a line from the cursor point to the point specified. The result is the stripey background below: So now we have to draw the actual graph line. The vertical axis (represented by the stripes) is the percentage usage level. The horizontal axis will represent time in 10 pixel increments. In the real world each update of the graph would be obtained via an Ajax call, say every 5 seconds, but here I just create random values and update the graph every second. Once again, we use the path method to draw a 5 pixel wide line. We initialise the path and store the reference to it in the SpGraph.path property like so: SpGraph.path = SpGraph.graph.path({ stroke: "#0f0", "stroke-width": 5, "fill-opacity": 0 }).moveTo(20, 110); Every update, we extend the line using the lineTo method like so: SpGraph.path.lineTo(20+SpGraph.updates*10, 110-perf); perf is a random value between 0 and 100. The SpGraph.updates property is a simple counter that allows us to control how many updates before the line is reset. The counter value is also used to plot the location of the line on the horizontal axis. After 35 updates the line is reset by removing it, using the SpGraph.path.remove method, and starting a new one. So the whole script looks like this: var SpGraph = { init : function(){ SpGraph.graph = Raphael("graph", 400, 200); SpGraph.graph.rect(0, 0, 390, 110, 10).attr("fill", "#000"); for(var x = 10; x < 110; x += 10) { var c = (x > 10) ? "#333" : "#f00"; SpGraph.graph.path({stroke: c}).moveTo(0, x).lineTo(390,x); } SpGraph.startPath(); SpGraph.updateGraph(); }, startPath : function() { if(SpGraph.path) { SpGraph.path.remove(); } SpGraph.path = SpGraph.graph.path({ stroke: "#0f0", "stroke-width": 5, "fill-opacity": 0 }).moveTo(20, 110); }, updateGraph : function() { if(SpGraph.updates++ < 36) { // imagine this value comes from an ajax request var perf = Math.floor(Math.random() * 100); SpGraph.path.lineTo(20+SpGraph.updates*10, 110-perf); document.getElementById('readout').innerHTML = perf+'%'; } else { SpGraph.updates = 0; SpGraph.startPath(); } SpGraph.timer = setTimeout("SpGraph.updateGraph();",1000); }, updates : 0 } window.onload = function () { SpGraph.init(); }; Don’t forget to see it working in the demo. OK, so maybe a sprocket usage graph isn’t exactly the legitimate, practical example I promised, but at least you got a look at what you can achieve with Raphaël with only a little effort. The documentation on the site isn’t complete, but it’s not too difficult to work out anyway. Why don’t you have a go yourself? Quick, Simple, cross-browser compatible, vector graphics on the web has never been easier. September 4th, 2008 at 10:59 am Does the data have to be written in script, or does the library have the capability to extract its data from existing HTML? If it’s pure JS there’s no fallback content for assistive devices to interpret, or for non-scrip users to see, therefore no legitimate way of using it at all (without adding that layer yourself). September 4th, 2008 at 11:09 am Well the library is purely for the creation of vector graphics, so I don’t think it’s the responsibility of this library. In the demo, I’ve updated the heading of the graph, a h1 element, with the current performance value each time the graph updates. No reason why the heading couldn’t have the initial value on page load. But, what’s the best way to update page content via Ajax and remain accessible? I guess, you could also have a manual refresh control, that grabs the latest perf data (since last update) and then produces a static graph from the data in the HTML. September 5th, 2008 at 10:44 am There’s no way to update page content via Ajax and remain accessible. Not if you’re being realistic. But then I suppose it depends on your benchmark. It depends whether you count JS support as an accessibility issue (I do; many don’t), and whether you can presume that a screenreader user is using the latest and greatest version (which may support ARIA, where older ones don’t). ARIA = Accessible Rich Internet Applications (to me, an oxymoron if ever there was one; but I’m not gonna get into that here — have a look at some of this for more: (thinks it’s cool) (not at all convinced .. actually I wrote that one, just so you know where the bias is coming from ;)) September 5th, 2008 at 10:46 am Oh I meant this one actually, but the other one is still interesting — September 10th, 2008 at 12:19 am How about a more realistic demo? This example really needs an abstraction layer, so much so that it is almost impossible to digest how good or bad this thing is as a graphics library. September 15th, 2008 at 7:07 pm I’m not sure whether this is a comment about the library or browser support for the library but… The demo works seamlessly in FireFox, Safari and Google’s Chrome, but IE8 reports invalid JavaScript unless I tell it to work in “compatibility mode” (emulating IE7). It appears IE8’s default “standards mode” is still out of step with the other browsers? :(
http://www.sitepoint.com/blogs/2008/09/03/easy-vector-graphics-with-the-raphael-javascript-library/
crawl-002
refinedweb
1,369
65.52
Environment: Python 3.7.2 macOS - Import sub1 normally from main.py Currently, I am writing a python3 program with the following directory structure and source code. ├── main.py └── modules ├── sub1.py └── sub2.py # main.py from modules import sub1 if __name__ == '__main__': sub1.sub1_func () # modules/sub1.py import sub2 def sub1_func (): sub2.sub2_func () if __name__ == '__main__': sub1_func () # modules/sub2.py def sub2_func (): print ('sub2') is main.py imports sub1.py and uses sub1_func () in sub1, sub1_func () refers to sub2.py sub2_func () in the same directory as sub1.py. If i run main.py at this time, ModuleNotFoundError: No module named 'sub2'will appear. Therefore, considering the thing seen from main.py, the sub1.py import statement was changed from import sub2to from modules import sub2The correct operation as expected was confirmed. ("Sub2" is output) However, when sub1.py is executed in this state, ModuleNotFoundError: No module named 'modules'will appear. sub1 is going to be referenced from other modules, so is there any way to import sub1 normally from main.py with import of sub1 as import sub2? . In the first place, isn't it very good to have such a file structure? If i am familiar with python, would you please teach me? - Answer # 1 - Answer # 2 The python import is referenced relative to the path when it is executed. To reference relative to the location of the file instead of the runtime path import .foo Like, you need to write explicitly. Related articles - i don't know how to import multiple csv files into mongodb with python - (python) i want to convert multiple json files with different columns into one csv format - python - i want to put multiple jsonl files in a dataframe at once - python - read multiple audio files (wav files) - python - read only the maximum value from multiple csv files and combine them into one file - python - i can't send multiple images with the line api - python - cannot inherit a class with multiple arguments - python multiple regression analysis statsmodelsformula - bash - output cron results to multiple files (one is standard error only) - python - i want to handle files with the path obtained by ospathjoin - python - how to load multiple time formats with pandas - python 3x - multiple outputs from a list of tuples - emacs - (python) i want to automate the work of reading files in order - python google drive api 100mb or more files cannot be uploaded - i want to perform multiple processes with else in python list comprehension notation - python - about multiple processing and loop processing in discordpy - processing python dat files - eliminating pandas install and import in python pyenv export ldflags - python - import the original csv dataset - python - randomly extract data from multiple data frames (no between data frames is the same) Rewrite import of sub1.py. Relative import
https://www.tutorialfor.com/questions-150749.htm
CC-MAIN-2021-10
refinedweb
465
54.42
Hello, I am taking my first basic computer programming course. We are learning how to use C# in Visual Studio. Recently we started learning about classes and how they work in OOP. I get the idea of what they are and what they do. I am having a problem writing the code that is needed for my homework assignment. Here is the assignment. A local carpenter creates desks according to customers orders. The cost of the desk is based on the following: 1. the desk length and width in inches 2. the type of wood 3. the number of drawers The price is calculated as follows: 1 Minimum charge of 250.00 2.If surface area is greater than 720 square inches, there is an additional 50.00 added. 3. The charge for different kinds of wood are as follows: a. Pine - no charge b. Mahogany - 160.00 c. Oak - 110.00 4. The basic desk has no drawers. For each drawers added, there is an additional 35.00 charge 5. There is a sales tax of 9% THE DESK CLASS 1. Create an object oriented class named Desk that has fields for the following: a. desk length b. desk width c. type of wood d. number of drawers e. cost for mahogany f. cost for oak g. surface charge h. drawer charge i. total cost of desk - Create properties for each field as required. a. the property for the total cost of the desk should be read only - Create a Method to calculate the cost of the desk - Note: a. All calculations are done by the Desk class b. No user inputs our outputs are done by the Desk class THE PROGRAM CLASS: 1. Accepts user inputs for all values required to determine the final cost of the desk. Each input must be preceded by a prompt 2. Creates and uses an object of the Desk class to a. pass the inputs to the correct properties of the object b. retrieve the total cost of the desk from the object 3. Displays the user inputs as well as the final cost of the Desk. 4. Note: a. The program class provides for all inputs/outputs of the program b. The program class does no calculations. HERE IS THE CODE I HAVE WRITTEN FOR THE PROGRAM CLASS using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { Desk newDesk = new Desk(); Console.WriteLine("Please enter the length you would like the desk to be in inches."); newDesk.DeskLength = Convert.ToInt32 (Console.ReadLine()); Console.WriteLine("Please enter the width you would like the desk to be in inches."); newDesk.DeskWidth = Convert.ToInt32(Console.ReadLine()); Console.WriteLine("What type of wood would you like? Pine, Mahogany (+$160.00), or Oak (+$110.00)."); newDesk.DeskWood =Convert.ToString( Console.ReadLine()); Console.WriteLine("There is a $30.00 charge for each drawer. How many drawers would you like?"); newDesk.DeskDrawers = Convert.ToInt32(Console.ReadLine()); Console.WriteLine("Total Cost is,{0}", newDesk.TotalCost ); Console.ReadKey(); } } } HERE IS THE CODE I HAVE WRITTEN FOR THE DESK CLASS using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication1 { public class Desk { int deskLength; int deskWidth; string deskWood; int deskDrawers; int mahogcost = 160; int oakCost = 110; int surfaceCharge; int drawerCost; double taxAmt = 0.09; int totalCost; public int DrawerCost { get{return drawerCost;} set{drawerCost = value;} } public int DeskLength { get { return deskLength; } set { deskLength = value; CalculateAreacost(); } } public int DeskWidth { get { return deskWidth; } set { deskWidth = value; CalculateAreacost(); } } public string DeskWood { get { return deskWood; } set { deskWood = value; } } public int DeskDrawers { get { return deskDrawers; } set { deskDrawers = value; CalculateDrawer(); } } public int SurfaceCharge { get { return surfaceCharge; } set { surfaceCharge = value; } } public void CalculateDrawer() { drawerCost = 30 * DeskDrawers; } public void CalculateAreacost() { if (DeskLength * DeskWidth > 720) surfaceCharge = 50; else surfaceCharge = 0; } } } I had my friend help me with a few of the things, he is no longer available though. The main issue I am running into is 1. Creating code that takes the user input given for the wood type and assigning it to a value such as 110 for oak or 160 for mahogany. 2. How to create a method in the Desk class that adds wood type, drawer charge, surface charge, 250.00, 9% sales tax. Then being able to take the total amount and return it and all original user inputs back to main and be able to display them. Every time I believe to be making progress and write some code. I go to test it in main by plugging it into a readline. Based on user inputs it should calculate a total, however I always yield 0. The reason I posted the full assignment is to see what my specific instructions are. I don't believe I should have as many variables declared in the Desk class. But based upon the instructions, I feel I have to?
https://www.daniweb.com/programming/software-development/threads/440000/carpenters-desk-program
CC-MAIN-2016-50
refinedweb
818
68.47
Reading clojure docs is a PITA Check for example this page: First of all, it looks like some badly formatted prose, definitely not as some technical document that is supposed to convey exact definitions. But the looks is just one thing, the content is even worse: “Loads libs, skipping any that are already loaded.” Normally I wouldn’t pick on the very first sentence, but if you read the whole page, you know what you will not find? You will not find out what “load” actually is and isn’t. The very thing this function does you are already supposed to understand before you start using it. This is mind boggling . Reading further at least we can find out what a ‘lib’ is, but by that time my brain is already flooded with the following expressions that I don’t understand: libspec, prefix list, common prefix, flag that modifies, ns macro. Sure, I have some intuitive feelings about all these, but more than a decade of programming taught me to not even try to listen to intuition when reading documentation because it just misleads me. So in order to understand the first paragraph I would have to google at least those five expressions and then follow each of them down the rabbit hole. And this is just the first paragraph. Ok, I realize if I start googling before reading everything, I might start chasing explanations that are present on this page, so I reluctantly read further, writing down everything that is unexplained so far to me: named set, resources, classpath, Java package, sharing names. Also, how can a shared name locate the root directory of anything? Let’s continue: classpath-relative, path-mapping, associated namespace (sounds like the mafia). Right, so now I see that libspec, flags and prefix lists are also explained to some degree, this could’ve been an easily recognizable feature of this page with some human friendly formatting. Instead every time I want to read anything about either of these things, I have to look for the for seconds, trying to distinguish between the grey text on grey background, like I am playing some boring game made by bad A.I. I just want to know what require does. Well, now I know that require loads libs. Let’s google this. So, there is this page that is the first in the google search results, obviously way better formatted, humanly readable page. Not sure why clojuredocs exists if there is this nicely formatted documentation? Also, if there are multiple docs, if I find some differences, how to know which is the correct? If there are no differences, why the duplicated content? Reading through the docs, stuff starts to clear up, but I still don’t know what loading is. Let’s google that. This time I actually write out the question, hoping that I get some stackoverflow or similar answer, because I am starting to run out of time and I am wishing for something that would raise less questions than how many it answers. I don’t see relevant SO questions, but the first result is this page: Not sure why not some of the nice docs, but I click and find out that the load function “loads code”. For some moment I stare at the screen and contemplate if there is any reason to try to ask about this on the support channels. But because last few times I did that I got told to “RTFM” (not literally, they are nice people they talk down to you in a very eloquent way), I give up and decide that maybe another day I will try to understand this again.
https://medium.com/@johnsontabouret/reading-clojure-docs-is-a-pita-a1097cab08fb
CC-MAIN-2018-22
refinedweb
614
66.57
Python Data Structures and Algorithms: Sort a list of elements using Gnome sort Python Search and Sorting : Exercise-13 with Solution Write a Python program to sort a list of elements using Gnome sort. Gnome sort is a sorting algorithm originally proposed by Dr. Hamid Sarbazi-Azad (Professor of Computer Engineering at Sharif University of Technology) in 2000 and called "stupid sort" (not to be confused with bogosort), and then later on described by Dick Grune and named "gnome sort". The algorithm always finds the first place where two adjacent elements are in the wrong order, and swaps them. It takes advantage of the fact that performing a swap can introduce a new out-of-order adjacent pair only next to the two swapped elements. A visualization of the gnome sort: Sample Solution: Python Code: def gnome_sort(nums): if len(nums) <= 1: return nums i = 1 while i < len(nums): if nums[i-1] <= nums[i]: i += 1 else: nums[i-1], nums[i] = nums[i], nums[i-1] i -= 1 if (i == 0): i = 1 user_input = input("Input numbers separated by a comma:\n").strip() nums = [int(item) for item in user_input.split(',')] gnome_sort(nums) print(nums) Sample Output: Input numbers separated by a comma: 5, 79, 35, 68, 25 [5, 25, 35, 68, 79] Flowchart: Python Code Editor: Contribute your code and comments through Disqus. Previous: Write a Python program to sort a list of elements using Bogosort sort. Next: Write a Python program to sort a list of elements using Cocktail shaker
https://www.w3resource.com/python-exercises/data-structures-and-algorithms/python-search-and-sorting-exercise-13.php
CC-MAIN-2019-39
refinedweb
255
56.39
Red Hat Bugzilla – Bug 868447 Runtime Error Could not execute JDBC batch update at sun.reflect.NativeConstructorAccessorImpl.newInstance0:-2 Last modified: 2013-10-11 15:36:18 EDT Description of problem: I think I had just clicked on Subscribe to attach a subscription to my system (may have clicked Update though). Instead of it doing that, I got a crazy Runtime error "Runtime Error Could not execute JDBC batch update at sun.reflect.NativeConstructorAccessorImpl.newInstance0:-2" which showed up in our standard error dialog format (check rhsm.log for more info. [Error Message]). This is the first time I've seen such a message. I was registered to prod, and probably trying to subscribe to the Red Hat Partner, Red Hat Employee, or some 60 day RHEV Eval subscription. I'm not sure if/how we can see the prod logs to get more info... Here's what I got from my logs - 2012-10-19 10:54:22,001 [DEBUG] @connection.py:305 - Loading CA certificate: '/etc/rhsm/ca/candlepin-stage.pem' 2012-10-19 10:54:22,001 [DEBUG] @connection.py:305 - Loading CA certificate: '/etc/rhsm/ca/redhat-uep.pem' 2012-10-19 10:54:22,002 [DEBUG] @connection.py:344 - Making request: GET /subscription/consumers/ebec29a8-d801-42f5-b02f-927213870bf8/owner 2012-10-19 10:54:22,847 [DEBUG] @connection.py:357 - Response status: 500 2012-10-19 10:54:22,848 [ERROR] @utils.py:56 - Runtime Error Could not execute JDBC batch update at sun.reflect.NativeConstructorAccessorImpl.newInstance0:-2 Traceback (most recent call last): File "/usr/sbin/subscription-manager-gui", line 48, in ? from dbus.bus import REQUEST_NAME_REPLY_PRIMARY_OWNER ImportError: No module named bus 2012-10-19 10:54:23,581 [DEBUG] @connection.py:357 - Response status: 200 2012-10-19 10:54:23,588 [DEBUG] @connection.py:323 - Loading CA PEM certificates from: /etc/rhsm/ca/ Not really sure what else I can provide to help, let me know if you need anything. This seems like it could be tough to try and track down. If it makes any difference, my auto-heal interval was set to 1 at the time :) Version-Release number of selected component (if applicable): subscription-manager-1.1.2-1.git.83.76f3b7f How reproducible: ???? Could be a once in a blue moon thing, not sure how to reproduce If it happens again, I'll be sure to update this BZ Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info:. Pretty old, and more then likely environmental at the time. Please re-open if you see it again.
https://bugzilla.redhat.com/show_bug.cgi?id=868447
CC-MAIN-2017-51
refinedweb
431
50.84
DCOP: Desktop COmmunications Protocol Preston Brown <pbrown@kde.org> October 14, 1999 Revised and extended by Matthias Ettrich <ettrich@kde.org> Mar 29, 2000 HTMLized by Hans Meine Hans Meine <rastajoe@gmx.net> May 25, 2000 Added a DCOPRef example: Tim Jansen May 12, 2003 Note that new features may not be developed for DCOP. The successor for DCOP is D-Bus The motivation behind building a protocol like DCOP is simple. For the past year, we have been attempting to enable interprocess communication between KDE applications. KDE already has an extremely simple IPC mechanism called KWMcom, which is (was!) used for communicating between the panel and the window manager for instance. It is about as simple as it gets, passing messages via X Atoms. For this reason it is limited in the size and complexity of the data that can be passed (X atoms must be small to remain efficient) and it also makes it so that X is required. CORBA was thought to be a more effective IPC/RPC solution. However, after a year of attempting to make heavy use of CORBA in KDE, we have realized that it is a bit slow and memory intensive for simple use. It also has no authentication available. What we really needed was an extremely simple protocol with basic authorization, along the lines of MIT-MAGIC-COOKIE, as used by X. It would not be able to do NEARLY what CORBA was able to do, but for the simple tasks required it would be sufficient. Some examples of such tasks might be an application sending a message to the panel saying, "I have started, stop displaying the 'application starting' wait state," or having a new application that starts query to see if any other applications of the same name are running. If they are, simply call a function on the remote application to create a new window, rather than starting a new process. DCOPalled, for you CORBA fellows) using the built-in QDataStream operators available in all of the Qt classes. This is fast and easy. In fact it's so little work that you can easily write the marshalling code by hand. In addition, there's a simple IDL-like compiler available (dcopidl and dcopidl2cpp) that generates stubs and skeletons for you. Using the dcopidl compiler has the additional benefit of type safety. This HOWTO describes the manual method first and covers the dcopidl compiler later. KApplication has gained a method called "KApplication::dcopClient()" which returns a pointer to a DCOPClient instance. The first time this method is called, the client class will be created. DCOPClients have unique identifiers attached to them which are based on what KApplication::name() returns. In fact, if there is only a single instance of the program running, the appId will be equal to KApplication::name(). To actually enable DCOP communication to begin, you must use DCOPClient::attach(). This will attempt to attach to the DCOP server. If no server is found or there is any other type of error, attach() will return false. KApplication will catch a dcop signal and display an appropriate error message box in that case. After connecting with the server via DCOPClient::attach(), you need to register this appId with the server so it knows about you. Otherwise, you are communicating anonymously. Use the DCOPClient::registerAs(const QCString &name) to do so. In the simple case: /* * returns the appId that is actually registered, which _may_ be * different from what you passed */ appId = client -> registerAs (kApp -> name()); If you never retrieve the DCOPClient pointer from KApplication, the object will not be created and thus there will be no memory overhead. You may also detach from the server by calling DCOPClient::detach(). If you wish to attach again you will need to re-register as well. If you only wish to change the ID under which you are registered, simply call DCOPClient::registerAs() with the new name. KUniqueApplication automatically registers itself to DCOP. If you are using KUniqueApplication you should not attach or register yourself, this is already done. The appId is by definition equal to kapp->name(). You can retrieve the registered DCOP client by calling kapp->dcopClient(). To actually communicate, you have one of two choices. You may either call the "send" or the "call" method. Both methods require three identification parameters: an application identifier, a remote object, a remote function. Sending is asynchronous (i.e. it returns immediately) and may or may not result in your own application being sent a message at some point in the future. Then "send" requires one and "call" requires two data parameters. The remote object must be specified as an object hierarchy. That is, if the toplevel object is called "fooObject" and has the child "barObject", you would reference this object as "fooObject/barObject". Functions must be described by a full function signature. If the remote function is called "doIt", and it takes an int, it would be described as "doIt(int)". Please note that the return type is not specified here, as it is not part of the function signature (or at least the C++ understanding of a function signature). You will get the return type of a function back as an extra parameter to DCOPClient::call(). See the section on call() for more details. In order to actually get the data to the remote client, it must be "serialized" via a QDataStream operating on a QByteArray. This is how the data parameter is "built". A few examples will make clear how this works. Say you want to call "doIt" as described above, and not block (or wait for a response). You will not receive the return value of the remotely called function, but you will not hang while the RPC is processed either. The return value of send() indicates whether DCOP communication succeeded or not. QByteArray data; QDataStream arg(data, IO_WriteOnly); arg << 5; if (!client -> send ("someAppId", "fooObject/barObject", "doIt(int)", data)) qDebug ("there was some error using DCOP."); OK, now let's say we wanted to get the data back from the remotely called function. You have to execute a call() instead of a send(). The returned value will then be available in the data parameter "reply". The actual return value of call() is still whether or not DCOP communication was successful. QByteArray data, replyData; QCString replyType; QDataStream arg (data, IO_WriteOnly); arg << 5; if (!client -> call ("someAppId", "fooObject/barObject", "doIt(int)", data, replyType, replyData)) qDebug("there was some error using DCOP."); else { QDataStream reply (replyData, IO_ReadOnly); if (replyType == "QString") { QString result; reply >> result; print ("the result is: %s", result. latin1()); } else qDebug ("doIt returned an unexpected type of reply!"); } N.B.: You cannot call() a method belonging to an application which has registered with an unique numeric id appended to its textual name (see dcopclient.h for more info). In this case, DCOP would not know which application it should connect with to call the method. This is not an issue with send(), as you can broadcast to all applications that have registered with appname-<numeric_id> by using a wildcard (e.g. 'konsole-*'), which will send your signal to all applications called 'konsole'. Since KDE 3.1 there is an even easier way to make a DCOP call: DCOPRef. Then you only need to create a DCOPRef to the object and as long as the function does not use unusual argument types, calling the function is as easy as this: DCOPRef barObject ("someAppId", "fooObject/barObject"); DCOPReply reply = barObject. call ("doIt", 5); if (!reply. isValid()) qDebug("there was some error using DCOP."); else { print("the result is: %s", ((QString)reply). latin1()); } Currently the only real way to receive data from DCOP is to multiply inherit from the normal class that you are inheriting (usually some sort of QWidget subclass or QObject) as well as the DCOPObject class. DCOPObject provides one very important method: DCOPObject::process(). This is a pure virtual method that you must implement in order to process DCOP messages that you receive. It takes a function signature, QByteArray of parameters, and a reference to a QByteArray for the reply data that you must fill in. Think of DCOPObject::process() as a sort of dispatch agent. In the future, there will probably be a precompiler for your sources to write this method for you. However, until that point you need to examine the incoming function signature and take action accordingly. Here is an example implementation. bool BarObject:: process (const QCString &fun, const QByteArray &data, QCString &replyType, QByteArray &replyData) { if (fun == "doIt(int)") { QDataStream arg (data, IO_ReadOnly); int i; // parameter arg >> i; QString result = self -> doIt (i); QDataStream reply (replyData, IO_WriteOnly); reply << result; replyType = "QString"; return true; } else { qDebug ("unknown function call to BarObject::process()"); return false; } } If your applications is able to process incoming function calls right away the above code is all you need. When your application needs to do more complex tasks you might want to do the processing out of 'process' function call and send the result back later when it becomes available. For this you can ask your DCOPClient for a transactionId. You can then return from the 'process' function and when the result is available finish the transaction. In the mean time your application can receive incoming DCOP function calls from other clients. Such code could like this: bool BarObject::process(const QCString &fun, const QByteArray &data, QCString &, QByteArray &) { if (fun == "doIt(int)") { QDataStream arg (data, IO_ReadOnly); int i; // parameter arg >> i; QString result = self -> doIt(i); DCOPClientTransaction *myTransaction; myTransaction = kapp -> dcopClient() -> beginTransaction(); // start processing... // Calls slotProcessingDone when finished. startProcessing (myTransaction, i); return true; } else { qDebug("unknown function call to BarObject::process()"); return false; } } slotProcessingDone(DCOPClientTransaction *myTransaction, const QString &result) { QCString replyType = "QString"; QByteArray replyData; QDataStream reply(replyData, IO_WriteOnly); reply << result; kapp -> dcopClient() -> endTransaction(myTransaction, replyType, replyData); } Sometimes a component wants to send notifications via DCOP to other components but does not know which components will be interested in these notifications. One could use a broadcast in such a case but this is a very crude method. For a more sophisticated method DCOP signals have been invented. DCOP signals are very similair to Qt signals, there are some differences though. A DCOP signal can be connected to a DCOP function. Whenever the DCOP signal gets emitted, the DCOP functions to which the signal is connected are being called. DCOP signals are, just like Qt signals, one way. They do not provide a return value. A DCOP signal originates from a DCOP Object/DCOP Client combination (sender). It can be connected to a function of another DCOP Object/DCOP Client combination (receiver). There are two major differences between connections of Qt signals and connections of DCOP signals. In DCOP, unlike Qt, a signal connections can have an anonymous sender and, unlike Qt, a DCOP signal connection can be non-volatile. With DCOP one can connect a signal without specifying the sending DCOP Object or DCOP Client. In that case signals from any DCOP Object and/or DCOP Client will be delivered. This allows the specification of certain events without tying oneself to a certain object that implementes the events. Another DCOP feature are so called non-volatile connections. With Qt signal connections, the connection gets deleted when either sender or receiver of the signal gets deleted. A volatile DCOP signal connection will behave the same. However, a non-volatile DCOP signal connection will not get deleted when the sending object gets deleted. Once a new object gets created with the same name as the original sending object, the connection will be restored. There is no difference between the two when the receiving object gets deleted, in that case the signal connection will always be deleted. A receiver can create a non-volatile connection while the sender doesn't (yet) exist. An anonymous DCOP connection should always be non-volatile. The following example shows how KLauncher emits a signal whenever it notices that an application that was started via KLauncher terminates. QByteArray params; QDataStream stream (params, IO_WriteOnly); stream << pid; kapp -> dcopClient() -> emitDCOPSignal ("clientDied(pid_t)", params); The task manager of the KDE panel connects to this signal. It uses an anonymous connection (it doesn't require that the signal is being emitted by KLauncher) that is non-volatile: connectDCOPSignal (0, 0, "clientDied(pid_t)", "clientDied(pid_t)", false); It connects the clientDied(pid_t) signal to its own clientDied(pid_t) DCOP function. In this case the signal and the function to call have the same name. This isn't needed as long as the arguments of both signal and receiving function match. The receiving function may ignore one or more of the trailing arguments of the signal. E.g. it is allowed to connect the clientDied(pid_t) signal to a clientDied(void) DCOP function. dcopidl makes setting up a DCOP server easy. Instead of having to implement the process() method and unmarshalling (retrieving from QByteArray) parameters manually, you can let dcopidl create the necessary code on your behalf. This also allows you to describe the interface for your class in a single, separate header file. Writing an IDL file is very similar to writing a normal C++ header. An exception is the keyword 'ASYNC'. It indicates that a call to this function shall be processed asynchronously. For the C++ compiler, it expands to 'void'. #ifndef MY_INTERFACE_H #define MY_INTERFACE_H #include <dcopobject.h> class MyInterface : virtual public DCOPObject { K_DCOP k_dcop: virtual ASYNC myAsynchronousMethod (QString someParameter) = 0; virtual QRect mySynchronousMethod() = 0; }; #endif As you can see, you're essentially declaring an abstract base class, which virtually inherits from DCOPObject. If you're using the standard KDE build scripts, then you can simply add this file (which you would call MyInterface.h) to your sources directory. Then you edit your Makefile.am, adding 'MyInterface.skel' to your SOURCES list and MyInterface.h to include_HEADERS. The build scripts will use dcopidl to parse MyInterface.h, converting it to an XML description in MyInterface.kidl. Next, a file called MyInterface_skel.cpp will automatically be created, compiled and linked with your binary. The next thing you have to do is to choose which of your classes will implement the interface described in MyInterface.h. Alter the inheritance of this class such that it virtually inherits from MyInterface. Then add declarations to your class interface similar to those on MyInterface.h, but virtual, not pure virtual. class MyClass: public QObject, virtual public MyInterface { Q_OBJECT public: MyClass(); ~MyClass(); ASYNC myAsynchronousMethod(QString someParameter); QRect mySynchronousMethod(); }; Note: (Qt issue) Remember that if you are inheriting from QObject, you must place it first in the list of inherited classes. In the implementation of your class' ctor, you must explicitly initialize those classes from which you are inheriting from. This is, of course, good practise, but it is essential here as you need to tell DCOPObject the name of the interface which your are implementing. MyClass::MyClass() : QObject(), DCOPObject("MyInterface") { // whatever... } Now you can simply implement the methods you have declared in your interface, exactly the same as you would normally. void MyClass::myAsynchronousMethod(QString someParameter) { qDebug("myAsyncMethod called with param `" + someParameter + "'"); } It is not necessary (though very clean) to define an interface as an abstract class of its own, like we did in the example above. We could just as well have defined a k_dcop section directly within MyClass: class MyClass: public QObject, virtual public DCOPObject { Q_OBJECT K_DCOP public: MyClass(); ~MyClass(); k_dcop: ASYNC myAsynchronousMethod(QString someParameter); QRect mySynchronousMethod(); }; In addition to skeletons, dcopidl2cpp also generate stubs. Those make it easy to call a DCOP interface without doing the marshalling manually. To use a stub, add MyInterface.stub to the SOURCES list of your Makefile.am. The stub class will then be called MyInterface_stub. Sometimes it might be interesting to use DCOP between processes belonging to different users, e.g. a frontend process running with the user's id, and a backend process running as root. To do this, two steps have to be taken: For the first step, you simply pass the server address (as found in .DCOPserver) to the second process. For the authentication, you can use the ICEAUTHORITY environment variable to tell the second process where to find the authentication information. (Note that this implies that the second process is able to read the authentication file, so it will probably only work if the second process runs as root. If it should run as another user, a similar approach to what kdesu does with xauth must be taken. In fact, it would be a very good idea to add DCOP support to kdesu!) For example ICEAUTHORITY=~user/.ICEauthority kdesu root -c kcmroot -dcopserver `cat ~user/.DCOPserver` will, after kdesu got the root password, execute kcmroot as root, talking to the user's dcop server. NOTE: DCOP communication is not encrypted, so please do not pass important information around this way. A few back-of-the-napkin tests folks: Code: #include <kapp.h> int main (int argc, char **argv) { KApplication *app; app = new KApplication (argc, argv, "testit"); return app -> exec(); } Compiled with: g++ -O2 -o testit testit.cpp -I$QTDIR/include -L$QTDIR/lib -lkdecore on Linux yields the following memory use statistics: VmSize: 8076 kB VmLck: 0 kB VmRSS: 4532 kB VmData: 208 kB VmStk: 20 kB VmExe: 4 kB VmLib: 6588 kB If I create the KApplication's DCOPClient, and call attach() and registerAs(), it changes to this: VmSize: 8080 kB VmLck: 0 kB VmRSS: 4624 kB VmData: 208 kB VmStk: 20 kB VmExe: 4 kB VmLib: 6588 kB Basically it appears that using DCOP causes 100k more memory to be resident, but no more data or stack. So this will be shared between all processes, right? 100k to enable DCOP in all apps doesn't seem bad at all. :) OK now for some timings. Just creating a KApplication and then exiting (i.e. removing the call to KApplication::exec) takes this much time: 0.28user 0.02system 0:00.32elapsed 92%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (1084major+62minor)pagefaults 0swaps I.e. about 1/3 of a second on my PII-233. Now, if we create our DCOP object and attach to the server, it takes this long: 0.27user 0.03system 0:00.34elapsed 87%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (1107major+65minor)pagefaults 0swaps I.e. about 1/3 of a second. Basically DCOPClient creation and attaching gets lost in the statistical variation ("noise"). I was getting times between .32 and .48 over several runs for both of the example programs, so obviously system load is more relevant than the extra two calls to DCOPClient::attach and DCOPClient::registerAs, as well as the actual DCOPClient constructor time. Hopefully this document will get you well on your way into the world of inter-process communication with KDE! Please direct all comments and/or suggestions to Preston Brown <pbrown@kde.org> and Matthias Ettrich <ettrich@kde.org>.
http://techbase.kde.org/index.php?title=Development/Architecture/DCOP&diff=60952&oldid=51400
CC-MAIN-2013-20
refinedweb
3,141
55.64
Bandits appeared in the city! One of them is trying to catch as many citizens as he can. The city consists of 𝑛 squares connected by 𝑛 𝑎𝑖 citizens on the 𝑖 𝑛 — the number of squares in the city (2≤𝑛≤2⋅105). The second line contains 𝑛−1 integers 𝑝2,𝑝3…𝑝𝑛 meaning that there is a one-way road from the square 𝑝𝑖 to the square 𝑖 (1≤𝑝𝑖<𝑖). The third line contains 𝑛 integers 𝑎1,𝑎2,…,𝑎𝑛 — the number of citizens on each square initially (0≤𝑎𝑖. 题意: 你在根节点,每秒你走一个格子。你到了叶子就抓住叶子上所有人。 每个点都有一些人,每一秒钟每个人向下走一个点,最终只能走到叶子。 你和其他人轮流走,你希望抓到最多人,其他人希望抓到尽量少的人。 思路: 对于每个点上的人,要使得你抓到的尽可能少,那么就要使得每个叶子上的人分配的尽量平均。第一步就是将所有叶子加到不超过最大值,如果还有剩余的人,那就平均分配。 则定义 d p [ i ] dp[i] dp[i]代表第 i i i个节点出发的最大值, L e f t [ i ] Left[i] Left[i]为第 i i i个节点对应所有叶子加到最大值所需要的人数。 #include<cstdio> #include<cmath> #include<algorithm> #include<vector> using namespace std; typedef long long ll; const int maxn = 2e5 + 7; vector<int>G[maxn]; int a[maxn]; ll dp[maxn],Left[maxn],siz[maxn]; void DP(int x) { if(G[x].size() == 0) { dp[x] = a[x]; siz[x] = 1; return ; } ll mx = 0; for(int i = 0;i < G[x].size();i++) { int v = G[x][i]; DP(v); siz[x] += siz[v]; mx = max(mx,dp[v]); } ll num = a[x]; for(int i = 0;i < G[x].size();i++) { int v = G[x][i]; num -= Left[v] + siz[v] * (mx - dp[v]); } if(num > 0) mx += (num + siz[x] - 1) / siz[x]; for(int i = 0;i < G[x].size();i++) { int v = G[x][i]; Left[x] += Left[v] + siz[v] * (mx - dp[v]); } Left[x] -= a[x]; dp[x] = mx; } int main() { int n;scanf("%d",&n); for(int i = 2;i <= n;i++) { int x;scanf("%d",&x); G[x].push_back(i); } for(int i = 1;i <= n;i++) { scanf("%d",&a[i]); } DP(1); // for(int i = 1;i <= n;i++) { // printf("%lld %lld\n",dp[i],Left[i]); // } // printf("\n"); printf("%lld\n",dp[1]); return 0; }
https://blog.csdn.net/tomjobs/article/details/109369755
CC-MAIN-2020-50
refinedweb
339
76.96
Phase One 645 Af Users Manual Capture 4 Guide Phase-One-645-Af-User-Guide-778392 phase-one-645-af-user-guide-778392 645 AF to the manual 36aa9dd4-792e-4fba-aa5c-2ac1051f0293 645 AF - User Guide 645AF_UG_EN Free User Guide for Phase One Camera, Manual 2015-02-06 : Phase-One Phase-One-645-Af-Users-Manual-518342 phase-one-645-af-users-manual-518342 phase-one pdf Open the PDF directly: View PDF . Page Count: 106 [warning: Documents this large are best viewed by clicking the View PDF Link!] User Guide Phase One Camera 2 On rights Ver. 1.11 - Updated 18 August 2008 Learn more about Capture One 4 on Learn more about Phase One 645 AF on Cover and back images Photo by: Torben Esker Capture One and Phase One are either registered trademarks or trademarks of Phase One A/S in the European Union and/or other countries. All other trademarks are the property of their respective owners. 3 Contents 1.0 Introduction 4 1.1 Open Platform – Freedom of Choice 4 1.2 warranty 5 1.3 Recommended hardware 5 1.4 Installing and Activation of software 6 1.5 Deactivation of Capture One 4 8 1.6 Screen calibration 9 2.0 The Body - the system 10 2.1 Unpacking the system 10 2.2 Batteries for camera 12 2.3 Batteries for the back 13 2.4 The parts of the camera system 14 2.5 Attach and remove lens 15 2.6 Attaching the back 16 2.7 The display 17 2.8 The buttons 18 2.9 LED lights 18 2.10 Setting diopter 19 2.11 Adjusting the Strap 19 2.12 Eyepiece shutter 20 2.13 Setting date and time 20 3.0 Basic functions 22 3.1 Setting ISO 22 3.2 Easy Photography 23 3.3 Measuring light – Exposure Metering 25 3.4 Focus modes 27 3.5 Using focus lock and infrared focusing 32 3.6 Shutter release modes 34 3.8 Flash photography 39 3.9 ashcompensationsettings 42 4.0 Advanced functions 43 4.1 Exposure Compensation 43 4.2 AE Lock 44 4.3 Auto Bracketing 46 4.4 Taking photos with the mirror up 48 4.5 Long exposure - Bulb Mode 49 4.6 Camera display light 49 4.7 Front/rear dial lock mechanisms 50 4.8 Depthofeld 51 4.9 Infrared photography 52 5.0 Tethered shooting 54 5.1 Connecting 54 5.2 Driver set-up 54 5.3 Tethered operations 55 6.0 The Back 56 6.1 CF card usage 57 6.2 Mounting and dismounting card on computers 60 6.3 Navigating the Back menu 62 6.4 Playmode 65 6.5 Playmode – zoom functions 66 6.6 Menu Mode 68 7.0 Custom function 80 7.1 Setting custom functions 80 7.2 Types of custom functions 81 Custom Functions overview 85 8.0 Lenses and Multi Mount 86 8.1 Functions of the Phase One lens 86 8.2 Function of the Phase One lens adaptor 87 8.3 List of alternative lenses 88 8.4 Lens Cast 89 8.5 4simplestepstocalibrateonxedlenses(MAC) 90 8.6 Largeformatandstitchedimages(MAC) 90 8.7 4simplestepstocalibrateonxedlenses(PC) 91 8.8 Largeformatandstitchedimages(PC) 92 9.0 Software 94 9.1 Getting started 94 9.2 Importing from CF card 95 10.0 Large format and technical cameras 96 10.1 Large format photography 96 10.2 Technical cameras 97 11.0 Maintenance 98 11.1 changing the focusing screen 98 11.2 Battery socket 99 11.3 Tripod/Electronic shutter release contact 99 11.4 Cameradisplayerror-notication 100 11.5 Lens maintenance 101 11.6 Back Maintenance 101 11.7 housingspecication 102 11.8 P+seriesTechnicalspecications 103 11.9 End User support Policy 104 4 1.0 Introduction 1.1 Open Platform – Freedom of Choice Thank you for choosing the Phase One 645 Camera. The Phase One 645 Camera provides you the most powerful digital camera solution whether you are working portable in the eld, or tethered in a studio. When shooting portrait, landscape, fashion, wedding, product or architectural photography you will always nd a solution from Phase One that ts your needs. The camera system gives you the absolute best solution when it comes to image quality and workow thanks to the ongoing research and development made by the Phase One through more than 20 years. Phase One is committed to not only provide the best digital solution for the professional photographer, but also to ensure the photographer freedom of choice regarding lenses, bodies, back, software, and accessories. The Capture One raw workow software for Mac OS X and Windows™ is the new generation 4 software, with tethered shooting as your option. The P+ Series of backs are legendary in the photographic business, used by world class photographers for years. The system comes in a suitcase and is ready to be used right out of the box. We sincerely hope you will enjoy working with this new and innovative camera platform. 5 1.2 warranty Please read the enclosed warranty certicate. Should any problem occur, please contact the place of purchase, your local dealer for consultancy. – Do not try to repair the camera yourself, unauthorized attempt for repairing will termite the warranty. 1.3 Recommended hardware Capture One 4 may run on older computers, but Phase One recommends following the minimum requirement to ensure the best result from Capture One 4. Apple® Macintosh®: G4, G5 or Intelbased Macs 768MB RAM 1GB of free hard disk space Calibrated color monitor with 1280x800, 24-bit resolution Mac OS X 10.4.11 or Mac OS X 10.5 Microsoft® Windows®: Intel® Pentium® 4 or equivalent 768MB RAM 1GB free hard disk space Calibrated color monitor in 1280x800, 24-bit resolution Windows XP®, Service Pack 2 or higher Windows Vista® Microsoft® .NET Framework 3.0 Redistributable package – In case you do not already have this installed, Capture One will initiate installation of this. We would recommend upgrading your computer in the areas below if you work with high pixel-count cameras or simply want to optimize performance: Use processors with multiple cores, e.g. Intel Core™ DUO or better. Having 2GB RAM or more. Plenty of hard disk space for your images. 6 1.4 Installing and Activation of software You can only install Capture One 4 when your computer is connected to the internet. unless you choose to install DB only. Install on Mac OS X: Capture One software includes an easy-to-use installer that will install all the software necessary to run the application on Mac OS X. To install the software follow the procedure below: 1. Either load the Capture One DVD, or download the application from the Phase One website:. 2. Open the Capture One disk image 3. Read and accept the license agreement presented 4. Drag the Capture One icon to the Applications folder 5. Open Capture One from your Applications folder Install on Windows: Capture One 4.1 includes an easy-to-use installer that will install all the software you need to run the application on a Windows based computer. To install the software follow the procedure below: 1. Either load the Capture One DVD, or download the application from the Phase One website:. 2. Run the executable software install le. 3. Read and accept the license agreement presented 4. Follow the on-screen instructions to complete the installation. - In case you do not already have Microsoft® .NET Framework 3.0 installed, Capture One will initiate installation of this. 7 To activate Capture One 4 you normally need to be connected to the internet. But installing as Digital Back Only does not need internet connection. Open the license activation dialogue via the menu Capture One>License. Your rst step is towards activating Capture One is by opening the license activation dialogue in the application as illustrated. Enter your License code and personal details In the license activation dialogue, type in the license code provided with your purchase of Capture One. You received the License code either by email or with the original software package. Type in the personal details that you want to register along with your software activation. Once you have entered the information press the “Activate License” button and your activation will be validated by Phase One’s activation server. Your software is now activated and ready for use! Troubleshooting If you are experiencing problems activating the software, follow the instructions provided in the application, read the software manual enclosed or visit our website for inspiration and troubleshooting: http:// 8 1.5 Deactivation of Capture One 4 To deactivate Capture One 4 from a computer you need to be connected to the internet. Open the license dialogue via the menu Capture One>License. Press the Deactivate button. Once you deactivate Capture One, the application will return to trial mode. If the trial period for the computer has expired, all current and pending processing will be cancelled, and you will not be able to continue working with the application until you reactivate it. Conrm that you want to perform the deactivation. After doing so, you can activate Capture One on another computer. 9 1.6 Screen calibration Your monitor is key-element in your daily workow. One thing that assists your ability of viewing the captures you have made is by using color neutral light. Consider your monitor the new digital lightbox. To ensure accuracy, monitors need to be hardware calibrated for accuracy. A quality monitor and calibration tool provides you with a guarantee that what you are seeing on screen is correct. Once a monitor has been calibrated, the color and brightness controls should be locked to prevent inadvertent changes. Hardware-based monitor calibrators are now available at reasonable prices. The process is simple, quick and enables images to be judged with certainty. Higher level monitors have internal calibrating software that works with professional calibration devices for ultimate accuracy. 10 2.0 The Body - the system The Phase One Camera system is created to provide as much exibility and openness as possible. Phase One have for years been producing the 2 lines, Classic and Value Added, below here you can see the content of the 2 different kits. 2.1 Unpacking the system The Phase One 645AF system is delivered in a case created for the travelling photographer, the waterproof and impact resistant case has the standard measurements of carry-on baggage in airplanes. Open the case by pressing and pull-back the latches on the front/ opening. Classic: The case is exible inside, created for you to decide the actual content and interior of the case. But as delivered the case will hold: • Phase One 645 AF body with • P+ Digital Back • Phase One 80mm f 2.8 Lens • Waterproof exible case in carry-on size Pouch 1 • 4.5 meter FireWire cable • Digital back duo-battery charger • Digital back battery • Capture One raw workow software • Battery charger power supply • International outlet adaptors • Protection caps body, lens and back Startbox • USB key with User Guide, technical documents and more • Quick Guide • Warranty Brochure 11 Value Added: Case • Phase One 645 AF body with • P+ Digital Back • Phase One 80mm f 2.8 Lens with lens hood and cap • Waterproof exible case in carry-on size with room for laptop computer • CF card installed • 4.5 meter FireWire cable • QP reference grey card • Lens cast calibration card • Capture One raw workow software • Sensor Cleaning Kit Startbox • USB key with User Guide, technical documents and more • Quick Guide • Warranty Brochure Pouch 1 • Digital back battery charger • Two digital back batteries Pouch 2 • Battery charger power supply • International outlet adaptors • CF card reader • CF card reader cable • Camera power module Accessories Box • Phase One 645 AF-HB multi-mount • Protection caps body, lens and back • Lens cleaning cloth 12 2.2 Batteries for camera Set the shutter release mode selector lever to “L” (to turn the power off). Use six “AA” alkaline. NiCD batteries should only be used in the camerabody if CF07 is set on rechargeable. 1. Lift the battery case lock lever, turn it counter clockwise and pull out the battery holder. 2. Insert fresh batteries with the + and - ends as shown in the drawing. 3. Return the battery holder to its case and lock it by turning the lever clockwise. Make sure it is rmly attached. - Be sure the batteries are placed with proper polarity Checking the Battery Power Set the shutter release mode selector lever to “S” (to turn the power on). Check the battery condition in the lower right corner of the top LCD display. When replacing the batteries, be sure to use six new batteries of the same type. Do not mix different types of batteries or old batteries with new ones. NEVER throw out batteries. C S L M.UP The batteries are sufficiently charged. There is little power remaining. Have new batteries on hand. Camera will still operate. There is very little power remaining. Camera will not operate. Set the shutter release mode selector lever to “L” (to turn the power off) and replace the batteries with new ones. When the batteries are emptied for power, “batt” flashes on the main LCD and the viewfinder’s LCD when the shutter release button is pressed. 13 2.3 Batteries for the back When the system is unpacked the rst thing to do, is to give the batteries a full charge. In the Value Added Suitcase comes with two 7.2 volt Lithium-Ion batteries. Only one battery is used in the P+ back at a time, but it is recommended to charge both batteries fully before you start. While charging the batteries, you can still use the camera back if you connect it to the IEEE1394/FireWire port on your computer, by using the 6pin FireWire. The charger can adapt to voltages within a range of 110 to 250 volts. It comes with an international set of source outlet adaptors (placed in the suitcase utility compartment), please select one that ts your outlet, and mount it by sliding it in from the top. Connect the unit to the outlet and charge the batteries (approximately 2,5 to 3 hours). NEVER throw out batteries, when a battery does not work, deliver the battery for appropriate disposal. Purchasing extra batteries The Phase One P+ back comes with two 2500mAh batteries. If you need to purchase extra batteries Phase One recommend Canon BP 915 2500 mAh. Due to difference in the tolerances of some third party batteries, these may not t into the digital back’s battery compartment. Do not try to force a battery into the compartment. When pressing the battery release button it should slide in without problems. Warning! •Only use the Charger to charge the specified batteries •Do not allow charger to get wet or get exposed to moisture •Keep the Charger out of reach of children •Once charging is completed, unplug the transformer from power source •Only use the original mains adaptor 12V DC or car lead •Never apply excessive force when connecting or disconnecting a battery or contact plate. •Keep all contacts clean. •Do not force down any of the contacts. •Do not short-circuit the contacts. •Never store the battery connected to the charger for an extensive period of time. •Do not expose to excessive heat or naked flame. •Do not dismantle or carry out any alteration to the product 14 2.4 The parts of the camera system Self timer button Main LCD backlight button Multiple exposure button Focus point selector button Set button Flash auto adjustment selectbutton Auto bracketing button Multiple exposure mode button Auto exposure lock button Digital back LCD panel Play button Menu button White Balance button ISO button Digital back ON/OFF button Digital back external connection 645 AF Diopter adjustment dial Digital back relase lock Digital back relase button Cover for CF card slot Syncro terminal Electronic shutter release contact Lens release button Focus mode selector lever Strap lug Hot shoe Exposure mode dial lock/release button Exposure mode dial AF assist infrared light Mirror Electronic contacts Lens mount alignment mark Depth of eld preview button AF lock button Drive dial Shutter release button Front dial Strap mount Rear dial Main (top) LCD Eyepiece shutter lever Diopter adjustment lens LCD screen Digital back power on/off Digital back power on/off External power socket Battery case lock lever Battery case Tripod socket Multiconctor external back control Flash sync ONLY for use on technical cameras or pther platforms Flash sync Remote camera control 15 2.5 Attach and remove lens 1. Remove the front body cap, just like you would remove a lens, by pushing the lens release button backward and then turn the front body cap or the lens itself counter clockwise and lift out. 2. Align the white alignment dot of the lens [A](on the shiny ange) with the camera’s white dot[B], t the lens into the camera and rotate it clockwise until it clicks into place. To remove the front lens cap, squeeze the shiny sections together and lift out. To remove rear lens cap turn it counterclockwise. Removing While sliding the lens release button back, rotate the lens counter clockwise until it stops and lift it off. After removing the lens from the camera body, protect both ends by attaching the caps. Oil, dust, ngerprints or water on the electronic contacts could result in malfunction or corrosion. Wipe such impurities off with a clean piece of cloth. Do not touch the distance ring or other rotating parts when attaching the lens. - When installing a lens, do not press the lens release button. MF 80mm 1:28 AF 22 22 11 11 44 ft m A B A 16 2.6 Attaching the back The P+ back is fully integrated with the camera body and is a part of the whole camera system. When no cassette is attached to the Phase One 645AF camera house the mirror is up and the shutter is open. This is the correct position when no back is attached. When attaching the P+ back to the camera body the shutter will close and the mirror comes down. It is important to ensure that the bottom part of the P+ back is pressed well into the locking mechanism on the camera back before the upper locking mechanism is pressed together. Failure to do this can cause an error with the camera body. The error is a state of continuously opening and closing the shutter. If this occurs, remove the P+ back. Please be aware that the shutter should be in the correct starting position (shutter open), if this is not the case, attach and remove the P+ back again to make sure that the camera body gets in the correct starting position. 17 2.7 The display The display on the camera housing will provide valuable information on shutterspeed/aperture value also you nd information on exposure program, compensations see the drawing for explanation; the most relevant information regarding the capture can be read on the bottom display in the viewer along with the auto-focus mark indicating that the focus is in place. The display on the back is a multifunctional display, the menus changes depending on the status and choices you make. Besides providing navigation, the display on the back can work as preview screen. Exposure metering mode mark Superimposing mode (data) Superimposing mode (index) Auto bracketing mode mark Self timer mode mark Superimposing mode (date) Program mode mark Program shift indicator Shutter speed (second)/Month and date AE lock mode mark Aperture/Year Multiple exposure mode mark Exposure compensation mode mark Flash compensation mark Custom function mode mark User function mode mark Battery power indicator AF area mark Exposure compensation value AE lock indicator Defocus indicators Auto bracketing mode mark Flash auto adjustment mode mark Multiple exposure mode mark Aperture Flash charge indicator Exposure compensation value / Dierence between metered and set exposure values Exposure metering mode display Focus marks: Displayed when subject is in focus Caution mark Exposure mode mark Shutter speed 18 2.8 The buttons The back is equipped with four buttons, these buttons will take you through all functions of the back, and the buttons will change function to match the menu shown on the display. Read more on the menus in the chapter regarding this. 2.9 LED lights When the camera is powered up you will see a short blink in the green and red LED’s in the right hand side of the display and you will hear a ready beep. The lights will turn off immediately. This is an indication that the camera is ready to capture. Green: When capturing an image the green LED is blinking rapidly to indicates that the P+ back is busy. Steady green light indicates that the backlight of the display is dimmed but the camera is still ready to shoot. (The time before this happens can be set in the P+ back and is described later under “Menu mode”) RED: If the red LED is on this indicates that the P+ back is writing to the storage media thereof the buffer is not emptied. The red LED indicator located just beside the CF-cardslot under the cover in the left side is assigned to only indicate CF card activity. When the red CF-slot LED is on do not remove the card from the card slot! This can damage the formatting of the card, resulting images or data might be lost or corrupted. • • 19 2.10 Setting diopter Look through the viewnder and make sure that the focus frame (Rectangle with Circle) is in sharp focus. If it is not, turn the diopter adjustment dial in the “–” direction if you are nearsighted, in the “+” direction if you are farsighted. If this is not sufcient you may require an optional diopter correction lens. See below. Point the camera at a bright, plain object such as a white wall when making this adjustment. Replacing the Diopter Correction Lens If there is dirt or dust on the lens surface, remove it with a blower or sweep it off gently with a lens brush. If there are ngerprints or dirt on the lens surface, wipe them off with a piece of clean, soft gauze. Using solvents could discolor the diopter correction lens frame. 1. Remove the lens supplied with the nder by pulling it downward. 2. Push the replacement diopter correction lens upward into the viewnder’s eyepiece frame until it clicks into place. 2.11 Adjusting the Strap Put the neck strap through the mounts and secure it to the buckle as illustrated. After attaching the strap, pull it and make sure it does not loosen at the buckle. Diopter not matching Diopter matching 20 2.12 Eyepiece shutter Close the eyepiece shutter when there is a strong light source behind the camera or when pressing the shutter release button without looking through the viewnder. (This prevents exposure error due to light entering from the viewnder.) Turn the eyepiece shutter lever in the direction of the arrow. 2.13 Setting date and time Date and time is set and controlled through the digital back. Default date and time is GMT+1. If the digital back has been without power for a longer period of time, it will automatically ask you to setup time and date when it is powered up. In the “Time & Date” menu_20<< 21 22 3.0 Basic functions 3.1 Setting ISO ISO functionality is controlled by the back. The default ISO setting is ISO 50 or 100 depending on the back of the Camera system. A rule of thumb is that the higher ISO you are using, the higher is the degree of noise in the image, though Capture One has a powerful noise reduction. Depending on the back the Phase One ISO scale is currently 50, 100, 200, 400, 800 OR 100, 200, 400, 800, 1600 using the button on the top left, when in the main menu on the back scroll up and down and press “enter” and the desired ISO is chosen, OR if using tethered mode use the Capture panel in the Capture One application. ISO and White Balance When the display is in its home position the two buttons to the left, ISO and WB brings you directly to the ISO and White balance settings, where you can scroll up and down, and select the setting you want with the “Enter” button. Also White Balance can be controlled by Capture One if you are working tethered. 23 3.2 Easy Photography 1. Set the shutter release mode selector lever to “S” (single-frame advance mode). There are two shutter release modes: “S” (singleframe advance mode) and “C” (continuous advance mode). When set to “L,” the power is turned off. 2. Set the focus mode selector lever to “S” (single focus mode). There are three focus modes: “S” (single focus mode), “C” (continuous focus mode) and “M” (manual focus mode). 3. Set the exposure mode selector dial to “P” (program auto exposure). There are four exposure modes: “P” (program AE), “Av” (aperture priority AE) “Tv” (shutter priority AE) “M” (manual mode). C S L M.UP Focus Mode Focusing S Single focus mode Half-press the shutter release button to focus. When the focus mark lights, the focus is fixed and the shutter can be released. C Continuous focus mode The camera keeps focusing continuously while the shutter release button is half-pressed. The shutter can be released regardless of whether or not the focus mark is lit. M Manual focus mode Focus manually. 24 P: Program AE - The aperture and shutter speed are determined automatically according to the shooting conditions. This mode is best suited for general photography, since it allows you to concentrate on the shooting. You can change the shutter speed and aperture by turning the front and rear dials while the “P” (Program AE) mode is selected. Av: Aperture priority AE - Set the desired aperture and the camera selects the correct shutter speed. Use this mode to control depth of eld. Tv: Shutter priority AE - Set the desired shutter speed and the camera selects the correct aperture. Use this mode to stop motion. M: Manual mode - Set this mode when you want to use special combinations of the aperture and shutter speed. 4. Exposure metering mode is automatically set to average/spot exposure metering before exposure metering is performed. There are three exposure metering modes: In the “A” mode the average brightness in the entire frame is measured with emphasis on the center of the frame. The brightness at a specic spot in the center of the frame is metered in the “S” mode. The “A-S” mode automatically switches between these two modes depending on the contrasts in the picture. ) NOTE: When a polarizing filter is used, ensure that a circular polarizing filter(C-PL) is used. The correct exposure cannot be obtained with a normal(linear) polarizing filter (PL). P Av Tv M X CF EL 25 3.3 Measuring light – Exposure Metering 1. Exposure mode mark is displayed when the exposure mode button A is pressed. Since three different exposure modes are displayed sequentially when either the front or rear dial is turned, select an appropriate exposure mode. 2. Press the SET button or exposure metering mode button A to enter the setting. Exposure Warnings With an inappropriate exposure setting, when shooting subjects that are too light or dark, the user is warned by the ashing external LCD or the LCD inside the viewnder. At such times, the correct exposure cannot be obtained. Warnings that the exposure is outside the metering range • Program AE (P) The shutter speed and f-number blink. • Aperture priority AE (Av) The shutter speed blinks. • Shutter priority AE (Tv) The f-number blinks. • Manual mode (M) The exposure metering value difference is displayed. Average/spot auto exposure metering Exposure metering is performed after automatically selecting average/spot exposure metering.• Depending on the subject conditions, center-weighted average/spot exposure metering is selected automatically, and the correct exposure is measured. • Spot exposure metering is automatically selected when the brightness of the spot exposure metering range becomes darker than the brightness of the entire screen. • If there is very little difference between the spot exposure metering value and center-weighted average exposure metering value, the correct exposure level is obtained as the intermediate value. Center-weighted average/spot exposuremetering The average brightness of the entire screen is measured, emphasizing the center of the screen. Center spot exposure metering The brightness of an area equivalent to 7.6% at screen center is measured, and the exposure is determined. The circle at screen center serves as a general guideline. This mode is suited to measuring subjects with strong contrasts or measuring only screen portions. NOTE: you can change the amount of time the metered value is shown by entering Custom Settings C-04. SET AEL P Av Tv M X CF B A 26 1. When exposure compensation button A is pressed, [+/-] appears on the external LCD. When the front or rear dial is turned counterclock. After taking pictures using the exposure compensation feature, be sure to return the exposure compensation dial to the “0” position. Exposure compensation is also possible during AE lock. The shutter speed changes with exposure compensation in manual mode (“M”). Display of the exposure compensation of the viewnder LCD - Without usage of Metz Flash. NOTE: 1. The width of the exposure compensation step can be changed. Custom settings 01 2. The maximum exposure compensation step can be changed to ±5EV. Custom settings C-05 . Exposure Mode Exposure Compensation display P Program AE The set value is displayed Av Aperture priority AE Tv Shutter Priority AE M Manual mode The difference between the metered value and the set exposure value is displayed X Synchro mode Not displayed SET AEL P Av Tv M X CF 27 3.4 Focus modes If autofocus AF is desired, chose AF on the focusing selector ring on the lens, then chose between S(single) and C(continuously) focusing. The Focus selection ring on the lens will help you to rapidly switch between AF and M, without having to change your grip of the camera. The shutter release button has a two-step action. When pressed lightly it stops at a certain point. In this manual this position is called the “half- press” position. When you “half-press” this button, the camera functions are activated. When the shutter button is pressed further down, the shutter is tripped. This position is called the “full-press” position. When you “half-press” this button, camera functions are activated. 1. Aim the camera so that the subject is within the focus frame. 2. Half-press the shutter release button, and focus will be adjusted automatically. When the focus mark lights, the picture is in focus. 3. When lights, press the shutter release button further down to release the shutter. Out of focus Marks Flashing: The picture is not focused and the shutter cannot be released. Either press the shutter release button again to adjust the focus or move the camera to change the position of the focus frame. While the camera is operated in the auto focus mode, lenses not equipped with the focus mode selector ring turn their focusing rings automatically to focus. Do not touch the focus ring. Lenses with the focus mode selector When a lens with the focus mode selector is attached and the focus 28. Single focus mode (S) This mode uses the focus-priority mechanism. The shutter can be released when the focus mark in the viewnder is lit. This mode is suited for still subjects. Focus is locked when the focus mark lights in the viewnder’s LCD. The shutter cannot be released if the subject is not in focus (if the focus mark does not light). To take another photo with a different composition, take your nger off the shutter release button then re-press the shutter release button again. Continuous focus mode (C) In this mode shutter release has priority to focusing. The shutter can be released regardless of whether the focus mark in the viewnder’s LCD is lit. Focus is adjusted continuously while the shutter release button is half-pressed. This mode is suited for moving subjects. Focus is not locked even if the focus mark is lit. The shutter can be released even if the focus mark is not lit. 29 Focus Areas You can select the focus area that best suits the kind of pictures you intend to take. The selected focus area can be checked on the external LCD panel. Normal focus area Position the subject within frame in the focus fame in the viewnder. If there are multiple objects in the focus frame located at various distances, the camera will focus the nearest object. Spot focus area The camera focuses at the center of the mark in the focus frame [O] in the viewnder. Manual Focus Mode (M) The auto focus function can be cancelled, and you can focus manually. 1. Switch to “M” (manual focus mode). Turn the focus mode selector lever and set it to “M” (manual focus mode). [MF]Appears on the top until the subject is in focus. When it is in focus, the focusmark lights on the viewnder LCD. Notice: You can select whether or not to display the focus mark and the out of focus direction mark. Custom settings C-18. Focus point selection mark Left AF area Right AF area Center AF area X M Tv Av P 22 25 2.25 0.7 0.8 ft m 11 4 4 CF A 30 Manual focusing 1. Switch to “M” (manual focus mode). Turn the focus mode selector lever and set it to “M” (manual focus mode). Appears on the external A until the subject is in focus. When your motive is in focus the focus mark lights in the viewnder LCD. - When a lens with the focus mode selector is attached and the focus. 31 Manual focusing using the focus mark (Focus conrmation method) With this camera, the focus mark lights in the viewnder’s LCD when the picture is in focus. With the shutter release button half-pressed, turn the lens focusing ring to focus on the subject. When the subject is in focus, the focus mark lights in the viewnder’s LCD. If is lit in the viewnder’s LCD, the camera is focused on a point behind the object. If is lit, the camera is focused on a point in front of the object. - Use the focus mark when taking photos in manual focus mode or using the M645 manual lens. - If you adjust focus using the focus mark with an M645 lens, make sure to open the aperture. You can use this function with a lens of f/5.6 aperture or higher. - You can set the camera so that only the focus mark is displayed. Custom settings C-18 When Auto Focus is Failed The auto focus function requires contrast on subject. Auto focusing may fail to achieve focus with certain subjects described below. In such cases, either switch to the manual focus mode and focus manually or focus an object at the same distance as the object you want to photograph, lock the focus using the focus lock mechanism, then take a picture. • Low-contrast subject (blue skies, white walls and other objects) • Two or more objects overlapping at different distances within the focus frame (animals in cages, etc.) • Subjects with continuous repeated patterns (building exteriors, blinds, etc.) • Extremely backlit reective subjects (car bodies, water surfaces, etc.) • Or when the subject is far smaller than the focus frame in focus turn focus ring clockwise turn focus ring counter clockwise 32 3.5 Using focus lock and infrared focusing Using the Focus Lock Function If the object that you want to focus on is not in the focus frame, the camera focuses on the background at the center. In such cases use the focus lock function to lock the focus before releasing the shutter. 1. Set the focus mode selector lever to “S” or “C.” Put the subject in the focus frame and halfpress the shutter release button. 2. Lock the focus. When the focus mark in the viewnder LCD is lit, press the AF lock button on the front of the camera to lock the focus. 3. Adjust the composition. With the shutter release button half-pressed, slide the camera to achieve the desired composition, and release the shutter. When the focus mode is set at “S” (single focus mode) and the focus mark is lit, hold the shutter release button halfway down to lock the focus. NOTE Assignment of the AEL and AFL buttons can be swapped. Custom settings C-15. - You can set the camera so that when the AFL button is pressed, AF is activated and AF lock is performed Custom settings C-19. m 33 AF Assist Infrared Light When the subject is dark or very low-key and the camera can fail to auto-focus, a red lamp may light on the front of the camera when the shutter release button is half-pressed. This light assists the camera’s auto focus function. Notice: The AF assist infrared light is emitted only when the focus mode is set to “S” (single focus mode). Effective range of the AF assist infrared light is limited. It does not reach distant subjects. - Range: 9m/29.5 ft. (using 80 mm f/2.8 lens) When using a lens hood or a bellows lens hood (sold as an optional accessory) that may interfere the assist light, set focus before mounting the hood. The AF assist infrared light can be disabled. Custom settings C-26. 34 3.6 Shutter release modes Single-Frame Mode The lm is advanced one frame each time the shutter is released. Set the shutter release mode selector to “S” Continuous Mode Photos are taken as long as the shutter release button is pressed. Set the shutter release mode selector lever to “C”. Photos are taken continuously at a rate depending on the buffer speed of the back mounted on the camera. Mirror up mode When the shutter button is half-pressed, the mirror moves up, and when the shutter button is pressed again, the shutter is tripped, and a picture is taken. For the mirror up shooting procedure. Self-Timer Mode In this mode, the shutter will be released 10 seconds after the shutter release button is pressed. Turn the shutter release mode selector lever to the position. When the shutter release is pressed, the self timer lamp will blink for 7 seconds. Then, it will blink more rapidly for 3 seconds and the camera releases the shutter. For instructions about the self timer function. C S L M.UP C S L M.UP C S L M.UP X M Tv Av P 22 0.8 ft m CF 35 3.7 Exposure Modes (P) Program AE The aperture and shutter speed are determined automatically for the optimum exposure, according to the existing ambient light. This mode is best suited for general photography, allowing the user freedom to concentrate on the subject. Hold down the [program] button and turn the exposure mode setting dial to “P” (program AE) position. Program Shift (PH/PL) You can change the shutter speed and aperture value by turning the front and rear dials in the “P” (Program AE) mode. In order to avoid blurred images (due to shaking while releasing the shutter), or to open the aperture, change to “PH” (high speed). For slower shutter speeds and wider depth of eld, change to “PL” (low speed). This function allows you to make these changes quickly. NOTICE: If a correct exposure cannot be obtained, the shutter speed and aperture value blink. In such cases, the pictures can be taken but they may out too bright or too dark If the shutter speed and aperture values blink on the main LCD and in the viewfinder display when the program line is shifted, the proper exposure cannot be achieved. Please select a different Program mode. When the Program line is shifted, the aperture value changes along with the shutter speed to maintain the proper exposure. You can choose either aperture or shutter-speed to give priority in program line shift. Custom settings C-14. Increment of the aperture and shutter speed can be set at either 1/3 or 1/2- stop. Custom settings C-01. SET AEL P Av Tv M X CF A SET AEL P Av Tv M X CF 5 4 3 2 1 0 -1 -2 -3 -4 30 15 8 4 2 1 1/2 1/4 1/8 1/15 1 /60 1/250 1/1000 1/4000 1/30 1/125 1/500 1/ 2000 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 EV Shutter speed (ISO100/AF80mm F2.8 D) F 32 22 16 11 8 5.6 4 2.8 Phase One 645 program shift chart Normal Program shift area 36 Aperture Priority AE (Av) Set the desired aperture, and the camera selects the optimum shutter speed accordingly. Use the Av mode to maintain specic control over depth of eld, i.e. taking portraits or landscapes. 1. Hold down the button and turn the exposure mode setting dial to “Av” (aperture-priority AE) position. 2. Turn the front or rear dial to set the desired aperture. Shutter Priority AE (Tv) Set the desired shutter speed and the camera selects the optimum aperture accordingly. Fast shutter speed can be used to freeze motion, and slow shutter speed can be used to blur motion on purpose. 1. Hold down the button and turn the exposure mode setting dial to “Av” (aperture-priority AE) position. 2. Turn the front or rear dial to set the desired shutter speed. NOTICE: The shutter speed value will blink when the subject is too dark or too bright for a correct exposure. To obtain the correct aperture, adjust the aperture value until the shutter speed value stops blinking and remains lit. When the exposure is compensated with the rear dial, the aperture can be set with the front dial only. Increment of the aperture can be set at either 1/3 or 1/2-stop. Custom settings C-01. Rotation direction of the dials to change the values can be altered. Custom settings C-13. The selected aperture level can be locked. Av Tv M X P CF SET AEL A CF M X Tv Av P SET AEL NOTICE: The aperture value will blink when the subject is too dark or too bright for a correct exposure. To obtain the correct aperture, adjust the shutter speed value until the aperture value stops blinking and remains lit. When the exposure is compensated with the rear dial , the shutter speed can be set with the front dial only. Increment of the shutter speed can be set at either 1/3 or 1/2-stop. Custom settings C-01 Rotation direction of the dials to change the values can be altered. Custom settings C-13. The selected shutter speed can be locked. Tv M X Av P CF SET AEL A Tv M X Av P CF SET AEL 37 Manual Mode (M) This mode is used to set both the aperture and shutter speed for total exposure control. Shutter speeds can be selected from B (bulb), 30 seconds to 1/4000 of a second. Aperture values can be set from the open to the minimum aperture. B (bulb) can also be specied in this mode. 1. Hold down the button and turn the exposure mode setting dial to “M” (Manual) position. 2. Turn the rear dial to set the desired aperture. 3. Turn the front dial to set the desired shutter speed. 4. When the shutter release button is halfpressed, the difference between the present settings and the metered value is displayed in the viewnder’s LCD panel. The value is displayed in 1/3 stop increments within a range of ±6 EV. CF M X Tv Av P SET AEL A NOTICE: When the exposure is compensated in the Manual mode, the difference between the metered value and the compensated value will be displayed on the viewfinder LCD. In the B (Bulb) mode, the difference with the metered value is not displayed. Increment of the aperture and shutter speed value can be set at either 1/3 or 1/2-stop. Custom settings C-01. The assignments of the front and rear dials can be swapped. Custom settings C-11. Rotation direction of the dials to change the values can be altered. Custom settings C-13. The selected aperture and shutter speed can be locked. CF M X Tv Av P SET AEL Notice: When the set value matches with the metered value, the difference indicator will show “0.0”. When the difference between the set value and the metered value is greater than ±6EV and the set value is lower the metered value, the indicator in the viewfinder LCD shows “– u –.” Contrarily when the set value is higher than the metered value, the indicator shows “– o –.” 38 One-push shift function When difference between the set value and metered value is displayed on the viewnder LCD in the Manual “M” mode, press the AEL button for approx. 1 second and the camera will automatically adjust the shutter speed to achieve the correct exposure based on the set aperture value. While the difference [B] between the set value [A] and the metered value is displayed on the viewnder LCD, press the AEL button [C] for approximately one second. The camera changes the shutter speed to an appropriate level. X Mode (X) Select this mode when you use a ash. The shutter speed will be xed at 1/125 second for synchronization. CF M X Tv Av P SET AEL C Notice: The aperture level can be selected for the parameter to shift. Custom settings C-20. Notice: The selected aperture value can be locked. The synchronizing speed can be changed. Custom settings C-23. When you take a photograph with TTL light metering with a Metz flash. X M Tv Av P CF SET AEL A B 39 3.8 Flash photography Phase One 645 AF is equipped with a horizontal local-plane metal shutter; this makes it unnecessary for the user to acquire lenses equipped with central shutters, though it still is possible to use these lenses optically. The focal-plane shutter provides higher shutter speeds, compared to central shutter lenses, which allow you to freeze a fast moving target by using very high shutter speeds. When using a focal-plane shutter it is not possible to achieve ash synchronization faster than 1/125sec, as the 2 shutter blades at e.g.1/500 are moving parallel creating a small slit allowing a small fraction of the light to enter the sensor area of the digital back. This shutter method allows for shutter speeds of up to 1/4000 sec. A central shutter will make it possible to achieve slightly higher shutter and ash sync speeds, but central shutters but will not be able to achieve high shutter speed. 1. To use a grip type ashgun or a strobe with other electric contacts than X contact, connect the sync. cord to the camera’s sync. terminal. (See note below about ashes designed exclusively for other camera makes.) 2. While pressing the unlock button, turn the exposure mode setting dial and set it to “X” (1/125 sec.) or “M” (manual). When “M” (manual) is selected, turn the front dial and set the shutter speed to 1/125 sec. or slower. 3. Turn the rear dial to set the aperture, and then take the picture. In addition to its standard ash sync system, the Phase One 645 AF features TTL (through the lens), off the lm (OTF), electronic ash exposure metering. NOTICE: This camera’s synchro contact is an X contact. Using flashes designed exclusively for other makers of cameras may damage the camera’s internal mechanisms if connected to the camera’s hot-shoe. In this situation, use an off-camera flash bracket and connect a sync. cord to the camera’s synchro terminal. When using flashes with a flash duration of 1/500 sec. or longer, set the shutter speed to 1/30 sec. or less. 40 A ash sensor located inside the camera body reads the ash reected off the lm surface at the moment of exposure. The sensor is connected via the Phase One 645 AF s dedicated hot-shoe to a shoe- or handlemount style Metz ash unit via the Metz SCA 3952 TTL Adapter. Maximum ash sync speed is 1/125 sec., making daytime synchronization possible. The ISO of the ash is automatically set through the TTL connection from the camera’s Film Magazine; any adjustment to this is instantly recognized after the setting is locked and the shutter release is half- pressed. Also, when Film Magazines with different ISO settings are switched on the camera body, the TTL ash connection instantly recognizes the change. To utilize the TTL ash feature with all TTL-operable Metz ash units, a Metz SCA 3952 Module is required. Please see the chart below for compatibility and/or additional adapters that may be necessary. The resulting ash exposure automation determines correct ash exposure and automatically adjusts the output of the ash. It also automatically corrects for exposure compensation normally required when using lters, close-up bellows or extension tubes. However, as with all TTL systems, it requires manual compensation for differences in lm surface reection characteristics. The amount of compensation is determined by experimentation and is performed on the Mamiya Film Magazine ISO setting. 1. Mount the SCA3952 adapter onto the Metz ash, insert fully into the camera’s hot shoe, and then tighten with the locking knob. 2. Set the exposure mode, and then check the shutter speed and aperture. Type of flash SCA3952 Module Converter Metz 44 MZ-2 shoe-mount x Metz 54 Mz-3 shoe-mount x Metz 45 CL-3 & 4 Handle-mount x SCA 3045 Metz 60 CT-4 Handle-mount x SCA 3000 Metz 70 Mz-5 &4 Handle-mount x Adapter Metz Flash Unit A For more info on Metz, contact the local Metzdealer or 41 Example: (1) When the size of the subject you want to light with the ash is relatively small within the picture (2) When the background behind the subject is extremely bright or when there is a strongly reective object in the background (3) When the background behind the subject is extremely dark (outdoors at night, etc.) (4) For ash photography with a narrow lm latitude 1. While in the P or Av modes, the camera can be set to release the shutter at the metered value, even the background behind the subject is dark. Custom settings C-24. 2. The sync. speed in the X mode can be set between 1/40 and 1/125 seconds. Custom settings C-23. * When the shutter speed is set to 1/2 increments, the sync. speed can be set between 1/45 and 1/125 seconds. Rear Curtain Syncro When a moving subject has been shot under this function, the ash of light appears after the moving subject. Rear curtain sync mode Front curtain sync mode This function is set by Custom function setting. Custom setting C-27. Exposure mode Shutter speed Aperture P Program AE Automatically set by camera to 1/60 sec. when the metered shutter speed is 1/60 or slower, and 1/125 when it is 1/125 sec. or faster. Automatically set by camera Av Aperture priority AE Any aperture Tv Shutter priority AE Automatically set by camera to 1/125 when the set shutter speed is 1/125 sec. or faster. Automatically set by camera M Manual mode Any aperture X Synchro mode 1/125 sec. Any aperture NOTICE: With TTL flash photography, the reflection of the flash is metered and the intensity of the flash is adjusted automatically, so TTL flash photography may not be able to suit to all conditions. In the cases described below, we recommend that you use a flashmeter to check the intensity of the flash or to use a manual flash setting. 42 3.9 flash compensation settings By combined use of a Metz ash and the SCA3952 adapter, the camera adjusts for ash. It can be adjusted within ±3EV in increments of 1/3 steps. 1. Turn on the power Install the SCA3952 adapter on the Metz ash, and put it on the camera then lock the ash in place using the locking knob on the ash shoe. Turn the shutter release mode selector lever to the “S” or “C” position, and turn ON the ash power switch. 2. When the ash charge conrmation lamp lights, press the set button A in. The is displayed on the main LCD panel. 3. Turn the front or rear dial to select the ash compensation value. External LCD Panel (normal display) 4. When the shutter button is half-pressed, the display appears on the external LCD, and appears on the LCD inside the viewnder with a + compensation, or appears with a – compensation. Viewfinder LCD readouts - If the ash-charge mark is not displayed, the ash compensation button [A] can not be used. - Keep pressing the set button to activate the ash compensation mode. You can check the exposure compensation value. - If you turn the shutter release mode selector lever to the “L” (power OFF) position, the compensation value will be canceled. Exposure compensation and ash compensation can be linked. Custom settings C-25. SET AEL P Av Tv M X CF A SET AEL P Av Tv M X CF 43 4.0 Advanced functions 4.1 pictures. Please keep in mind; you can do quite a lot of work using the High Dynamic Range Tool in Capture One 4. With the exposure compensation dial 1. When exposure compensation button A is pressed, a ppears on the external LCD. When the front or rear dial is turned counter- clock. NOTICE: After taking pictures using the exposure compensation feature, be sure to return the exposure compensation dial to the “0” position. The exposure compensation dial is locked at the “0” and positions. The exposure compensation feature is available during AE locked operation. The width of the exposure compensation step can be changed. Custom settings C-01. The maximum amount of the compensation can be set either at ±3 or ±5. Custom settings C-05. Exposure mode Exposure compensation display PProgram AE The set value is displayed Av Aperture Value Priority Tv Time Value Priority MManual Mode The difference between the metered value and the set Exposure value is displayed XSync Mode Not displayed 44 4.2 AE Lock Shooting with the AE lock function is useful in cases where the subject to be brought into focus differs from the subject whose exposure is to be measured or when measuring the exposure of a particular part to be brought into focus using spot exposure metering mode while that part is on the shooting screen. The AEL button will lock the Auto-exposure value as the photo is being recomposed. 1. Turn the shutter release mode selector lever to “S” or “C.” 2. Turn the exposure mode setting dial and select any of “P,” “Av,” or “T”. 3. Focus on the subject for metering exposure, and press the AEL button on the rear of the grip. [ ] Will appear on the viewnder LCD, indicating that the exposure value is locked. 4. Slide the camera to recompose the shot, and take the picture. SET AEL P Av Tv M X CF A NOTICE: [ ] in the viewfinder LCD blinks to indicate the exposure is locked, when you continue to take the next picture in the AE lock mode. NOTICE: If you turn the shutter release mode selector lever to the “L” (power OFF) position, or after elapse of one hour, the AE lock mode will automatically be cancelled. NOTICE: In the Manual “M” exposure mode, you cannot use the AE lock function. When the difference between the metered value and the set value is displayed, press the AEL button for approximately one second, and one-push shift function will be activated and the camera will automatically adjust the shutter speed. 45 Metered-value difference indicator Keep pressing the AEL button and the difference between the metered exposure value and the exposure of the new composition will be displayed on the viewnder LCD. This function can be used to see if an object of very different brightness levels can be properly photographed. If the difference between the set value and the metered value exceeds 6EV, the viewnder LCD blinks “– u –” for underexposure and “– o – ” for overexposure. By turning the front or rear dial in the AE lock mode, you can change the aperture and shutter speed value without changing the exposure value that is set when entered into AE lock mode. In the “P” mode (Program AE) mode, turning either the front or rear dial shifts the program to “PH” and “PL.” When in “Av” (Aperture-priority AE) or “Tv” Shutterpriority AE), turning one of the dials changes both the aperture and shutter speed values. NOTICE: The way to cancel the AE lock can be changed. Custom settings C-17. Half-pressing of he shutter release button can activate the AE lock mode. Custom settings C-16. The assignment of the AEL button and AFL button can be swapped by using Custom settings C-15. Exposure compensation and auto- bracketing function can be used when the camera is in the AE lock mode in normal operation or with the mirror locked up. 46 4.3 Auto Bracketing With auto exposure bracketing, you can capture different exposure variations automatically for three or two successive frames, when it is difcult to determine an exposure compensation value. The number of frames to be taken, the bracketed shooting sequence, bracketing margin and other settings can be selected as desired for shooting in auto bracketing mode. 1. Turn the shutter release mode selector lever to the “S” or “C” position. When set at the “S” position, you can shoot one frame with each press of the shutter release button. In the “C” mode, the camera takes three (or two) frames successively with one press of the shutter release button. 2. Keep pressing the auto-bracketing button for approximately one second, the auto bracketing mark will blink on the top LCD panel. Turn the rear dial before this indicator goes out, and change “OF” on the display to “On”. 3. While the auto bracketing mark is blinking, turn the front dial to change number of frames (3 or 2), sequence of the shots in 2-shot mode (shown above), and increment (1/3, 1/2, /2/3 or /1-stop). 4. Press the shutter release button. When the shutter button is pressed in auto bracketing mode the shooting sequence and auto bracketing mark blink on the LCD inside the viewnder. Furthermore, the auto bracketing mark blinks, the bracket step width is displayed, and the shooting sequence can be checked on the external LCD. 5. After taking pictures, press auto bracketing set button A, turn the rear dial, set auto bracketing mode to “OF,” and release. Then press the auto bracketing set button or half-press the shutter button to return to the normal display mode. NOTICE: When you want to cancel the auto- bracketing mode, turn the rear dial to change “On” to “OF” SET AEL P Av Tv M X CF A SET AEL P Av Tv M X CF SET AE L P Av Tv M X CF Underexposure Bracketing Margin Overexposure Auto Bracketing Icon Standard Setting selection NOTICE: The letters (n, u, o) indicate the type of exposure (“n” for normal, “u” for underexposure and “o” for over- exposure) and numbers indicate increment (0.3 for 1/3, 0.5 for 1/2, 0.7 for 2/3, and 1.0 for 1/1) By pressing any other button or leaving the camera for 5 seconds, setting for the auto bracketing will be stored. OverexposureNormal exposure Underexposure 47 Single-Frame Mode (S) Press the shutter release button for each shot. The camera meters adequate exposure value for each shot and performs auto-bracketing. The camera stays in the auto-bracketing mode until the last frame of the roll lm is exposed or you cancel the auto-bracketing mode manually. Continuous Mode (C) By pressing the shutter release button once, the camera takes 3 (or 2) shots in series. With each press of the shutter release button, the camera repeats auto-bracketing. The standard (normal) exposure value will be xed when you take the rst frame. When the number of available frames of the current lm is less than 3 (or 2) in the auto-bracketing mode, the “– no – ” mark blinks and the camera automatically cancels the auto-bracketing mode. When you want to cancel the auto-bracketing mode, turn the rear dial to change “On” to “OF”. NOTICE: If you turn the shutter release mode selector to the “C” position before taking three (or two) frames, the camera will restart the auto- bracketing from the initial frame (normal exposure in the default setting). NOTICE: Order of the exposures in 3-shot auto-bracketing can be changed. Custom settings C-10. The way to cancel auto-bracketing mode can be changed. Custom settings C-11. Exposure Mode Setting PProgram AE Shutter speed varies Av Aperture Priority AE Shutter speed varies Tv Shutter Priority AE Aperture varies MManual Mode Shutter speed varies XX-sync mode No setting AE settings under auto-bracketing mode 48 4.4 Taking photos with the mirror up This function prevents mirror-caused vibrations which may blur the image in close-up photography, when shutter speed is slow, when a telephoto lens is used, or when photographing a poster or another picture. When using the mirror-up, Electromagnetic Cable Release RE401 (optional) is recommended. 1. Set the drive dial to “M.UP” 2. Select “S” (single focus mode) by turning the focus mode selector lever. 3. Turn the exposure mode-setting dial to choose any of “P”, “Av” or “Tv” exposure mode. 4. Focus the subject, and determine composition and exposure 5. The mirror moves up when the shutter button is fully pressed. 6. Pres the shutter button again to take pictures. In the manual mode 1. Set the focus mode selector lever at “M” (manual focus mode) position. Turn the lens-focusing ring to focus. 2. Determine the exposure, focusing and frame structure by pressing the shutter release button halfway while looking into the view nder. 3. Lock the mirror up by pressing the mirror-up button. NOTICE: Auto bracketing exposures can be made when the auto bracketing mode is set before taking photos with mirror up. The mirror goes back to the normal position in 30 seconds. This can be changed to 60 seconds or no limitation by the custom function . (See page 89) Keeping the mirror up consumes more power. The mirror will return to the original position if the lens is removed from the camera body. M .UP C S L SET AEL P Av Tv M X CF WARNING: DO NOT point the lens at the sun during the mirror up mode. The sun’s intense light can scorch and damage the shutter curtain. 22 25 2.25 0.7 0.8 ft m 11 4 4 P Av Tv M X CF 49 4.5 Long exposure - Bulb Mode To expose lm longer than 30 seconds, adjust the shutter speed to “B” (bulb). In order to prevent camera shake, use an electromagnetic shutter release and tripod. 1. While pressing the unlock button, turn the exposure mode dial and set it to “M” (manual mode). 2. Turn the front dial to select “bulb”, then turn the rear dial to set the aperture. 3. Determine the composition, focus, and then take the picture. The shutter remains open as long as the shutter release button is pressed. 4.6 Camera display light To see the top display at night or in dark places, press the backlight button A/ . The backlight will go on approximately 10 seconds and go off unless there is another operation. When operating the camera while the backlight is on, the backlight will be lit for approximately another 10 seconds. CF M X Tv Av P SET AEL NOTICE: When releasing the shutter, or pressing the backlight button A/ while the backlight is on, the backlight will go OFF. The backlight can be set to turn on during the camera is holding metered value. Custom settings C-06. NOTICE: As the camera is electronically controlled even during exposures, it is recommended to replace batteries before bulb exposure. Normally the camera can take a picture with a bulb shot up to 60 minutes. However, the bulb shot time can be changed from one minute to infinite. Custom settings C-21. It is possible to set the camera as the shutter remains open until the button is pressed once again. Custom settings C-22. 50 4.7 Front/rear dial lock mechanisms When the Electronic Dial Lock is “On,” all currently set values in “Av” (Aperture Priority AE), “Tv” (Shutter Priority AE) and “M” (Manual mode) cannot be adjusted with the front or rear dials. This prevents accidental change of shutter speed or aperture values. Press down both the multiple exposure mode button and the auto bracketing mode button for approximately one second, until the “On” indicator blinks. To release the mode, hold down the same buttons until “OF” blinks. 3. “L” is displayed on the main LCD to indicate that operation of the front and rear dials is blocked. When the dial lock is ON, the shutter speed and aperture will not change even if you turn the front or rear dial. When you activate the electronic dial lock, and if you then operate the electronic dial, the dial lock indicator “L” on the main panel blinks for three seconds to show that the electronic dial lock is functioning. Av Tv M X P CF SET AEL NOTICE: The setting will be stored after one second. Dial lock can not be set when the exposure mode is “P” (program AE). Even while dial lock is set, the front dial or rear dial can still be used to perform the various settings. (Dial lock is temporarily released.) Av Tv M X P CF SET AEL 51 4.8 Depth of field Depth of eld (D.O.F.) is dened as the zone of sharpness before and behind the plane of focus. It depends on distance to subject, focal length of lens, aperture setting and distance the lens is focused at. In addition to visual observation via the depth of eld preview button, the D.O.F. can be determined by using the depth of eld scale on each lens. The f/stop numbers appear on both the right and left side of the white index mark in the center of the scale. Simply read the gures which appear above the f/stop numbers on the distance scale of the lens. (see illustration below) Depth of Field Preview Button When the preview button is pressed in, the depth of eld for the aperture set on the camera can be checked by looking through the viewnder. After focusing, press the preview button. The diaphragm will be stopped down to the set aperture. NOTICE: While operating the preview button, you cannot release the shutter When the aperture is open (the subject depth is small) When the aperture is stopped down (the subject depth is large) 1.5 22 2211 114 4 1.51.2 2 54 7 ft m 52 4.9 Infrared photography Infrared Photography is complicated when using digital backs, as the digital back is adjusted to match the viewable light perfectly. To make good infrared photography, you need the back adjusted for this or a back dedicated to infrared photography. DO NOT TRY THIS AT HOME – all corrections in this area must be done by Phase One to ensure the precision. If you remove the protectionglass or make other physical adjustments on the back the warranty will immediately be annulled. If you consider Infrared Photography, please contact your local Phase One dealer for technical advice and pricing on this. When taking photos using infrared lm, the position at which the subject is in focus is slightly different than that of regular lms. This is because the infrared rays have a longer wavelength and the image converges behind the lm plane of regular lm. Use the procedure described below when taking photos using infrared lm. 1. Set the focus as usual. Read the point on the distance scale matching the center index of the depth scale. 2. Set the focus mode selector lever to “M” (manual focus mode). Turn the focusing ring clockwise and align the read point to the infrared index. NOTICE: Use a red filter when taking photos using infrared film. Be sure to read the infrared film’s usage instructions. You cannot take photos in AE modes when using an infrared film. 53 54 5.0 Tethered shooting Tethered photography with Phase One is as easy as plug and play can be, even though the quality and technology is advanced, it is created to match all studio environments. 5.1 Connecting Connect the Fire-wire cable to the back of the camera and on the back of your Mac or WinPC – though there can be found computers with Fire- Wire plug-ins on the front, our experience is that the back connection is more stable, and functions better. Capture One will automatically recognize the camera, and settings shared, read more on capturing in the software manual. 5.2 Driver set-up Install Capture One on your computer, follow the instructions provided with the software, and activate the software. There is no specic program set-up except the set-up or recommended hardware provided in the beginning of this user guide as well as in the user guide for Capture One. Eventual rm ware announcements will be available on our website, and in our newsletters. 55 5.3 Tethered operations When operating in a studio, connected to a computer via FireWire you are not dependent on battery power or storage media. You can capture directly to the Phase One Capture One Raw workow software on either Mac or PC, providing power to the P+ back via FireWire without the battery or CF-card inserted. When operating tethered you have the option of capturing the images to the CF-card or transferring captures directly to the currently assigned capture folder in the Capture One application on the computer hard disk. The display on the P+ back can either be turned off while shooting tethered or set to display the images while they are shot, just as if shooting untethered. When unplugging the P+ back from the FireWire, the P+ back will default to untethered mode, capturing to CF-card or microdrive, and using the battery for power. Also when capturing tethered to a laptop with 4 pin mini 1394/FireWire without power it will require a battery in the P+ back. With the (non P+) P 20 and P 25 it is necessary to use the Phase One “No Firewire Power Solution” Part.# 70508 to force battery power. Using the four menu buttons you can setup the preferences for all these operational features. Consult the Capture One 4 manual for detailed intro to the software 56 6.0 The Back The back is a highly developed piece of electronic. The Phase One backs are created to provide a natural and easy workow, without creating unnecessary complicated functions or menubrowsing. You can see the menuowchart here and read more on the menuoptions in this chapter. 57 6.1 CF card usage When working with CF-cards, card readers and digital cameras it is very important to follow a few rules, to avoid loss of data. Phase One recommends that you test-drive all new Compact Flash™ cards (including the one enclosed). By doing an initial test to verify that the capture les are stored properly on the card and can be accessed on a computer you will avoid unpleasant surprises on location or when you return from a job. Compact Flash™ cards are manufactured by other suppliers, and Phase One cannot guarantee that the cards are not defective. Inserting and ejecting cards on the P+ back The compact ash card or microdrive is inserted in the hidden slot located under the cover on the left hand side of the P+ back. Insert the card with the brand label facing the display end of the digital back as shown in the image. When the card is inserted no parts are sticking out, the cover can be closed. To eject the card push the small button just above the card once, and an ejecting pin will come out. Pushing this pin all the way back in will eject the card. Always format your Memory card in the P+ back in general all CF-cards or microdrives comes preformatted and ready to use in the P+ backs. However to ensure the best performance from these cards it is to be considered best practice to always format them in the P+ back. Formatting of the memory card is done in either FAT 16 or FAT 32 depending on card size, and if the formatting is done in the P+ back cluster sizes on the disk is set for best performance. It is however also possible to format the cards on either Mac or Windows, this is explained in the following sections of this chapter. 58 CF card usage – 3S the Secure Storage System When a card is inserted into the P–back, a complete disk check for a valid le structure is performed. For normal CF-cards you will not even notice the extra time it takes larger cards will of course take slightly longer to load Large Microdrives are experienced as slow, but a progress bar showing the status if the time exceeds 2 seconds The progress of a disk check is indicated with series of small dots in the disk icon. It is not recommended to turn off disk checking, but it is possible by selecting the “Disk Checking” available in the “Menu” under “Storage”. Whenever Disk Checking is turned off the capture counter turns red to indicate that the disk has not been checked. Disk Check summary With the 3S technology we have created a new and safe storage system in the P – back that is much more rugged than anything else seen in the industry. • We now offer full formatting support in the P – back (no more need for formatting on the computer) • Damaged or wrong formatted cards will be detected immediately, and we now also have the ability to reformat the cards to correct them. • Ejecting a card while in a writing session will not damage the le structure of the entire CF-card, only the image being written and the images in the buffer can be damaged. • No other digital back or DSLR camera has this level of storage security! 59 6.1 CF card usage in general Ejecting the card while the P+ back is still writing to the card (red LED is on) will cause images that are still not written to the card, to be lost or damaged. Also, ejecting the battery while the P+ back is still writing might result in loss of the data that is not yet written to the memory card. For rescue tips in situations where the P+ back reports that you have a damaged card, please see “Sandisk card and Card reader” section. General handling guidelines Especially when using microdrives you have to be careful not to drop them on the ground or even on a table. Compact ash cards are not as vulnerable as microdrives. Please keep the card away from moisture and sand and don’t bend it. Use the supplied jewel box as a storage container for the card. Using cards in the card reader When inserting the card into the card reader on a Mac or PC, the card will be mounted as a removable drive on the computer. For information on how to import the les to Phase One Capture One, please consult the Capture One online user guide available under “Capture One Help” in the Help Menu. 60 6.2 Mounting and dismounting card on computers On Windows XP and Windows 2000 you can avoid confusing the system or, worst case, end up with a CF-card that was unintentionally erased, it is required to safely eject the card by right clicking the icon in “My Computer” and select the “Eject” option. On a Mac the card have to be unmounted, by dragging it into the trash, or selecting eject in the “File” menu, or ejecting from the Capture panel inside the Capture One software. Just removing it, and reinserting it can confuse the system, possibly resulting in uncontrolled read or write errors. If this happens, restarting the computer usually solves the problems. Preparing the CF-cards Most CF cards are pre-formatted and ready to be used in the P+ back. The P+ back supports cards formatted in either FAT 16 or FAT 32. If your card is not recognized in the P+ back it is possibly due to a wrong le system formatting on the card. Mac HFS, UNIX or NTFS le systems are not supported by the P+ back, and cards with these le systems will not be recognized. The card will have to be formatted in either FAT 16 or FAT 32 using a computer, Mac OS X or Windows. 61 Recommended formatting is by using the back Selecting “Format disk” will erase the CF-card in the P+ back. The CF- card will be formatted as FAT-32. Formatting on a Mac OS X computer On Mac OS X the formatting cannot be done directly in the nder but is easily done inside the Disk Utility located in the Applications > Utilities folder. Open the Disk utility and select the Disk (not just the partition, but the entire disk). Select the Erase panel as shown on next page, and select MS-DOS File System. Give the disk a name and click on Erase to erase and format the entire disk for use with the P+ back. Choose “Options” in the formatting dialog to specify a complete and thorough formatting of the media. Formatting on a Windows computer Insert the CF-card in the card reader, and select the drive when it mounts in “My computer” or in the Explorer. Right click on the drive and select “Format” from the pop-up menu. Select FAT32 or FAT16 from the “File System” pop-up. Give the card a name and click Start to format the card to be used with the P+ back. To specify a complete and thorough formatting of the media resetting all to zeros, do not enable the quick erase option. Disabling iPhoto Autostart (Mac OS X) iPhoto Autostart can be disabled in the Mac system preference. Select “CD’s & DVD’s” and change the setting for “Picture CD” to either “Ignore” or point it to the Capture One Application you are using. 62 6.3 Navigating the Back menu When the P+ back is turned on, the screen will always be in its home display position or “Main screen”. Pressing and holding down the upper left button on the back will also bring you to the Main screen, regardless of where you are in the menu system. The Main screen has an indicator in the top showing remaining captures, and a battery indicator at the bottom showing the remaining battery capacity. When either indicator reaches zero, it will start blinking, to indicate that either storage or battery needs replacement before capturing any more images. The main screen also shows the current ISO setting, white balance setting and IIQ Raw le format selected. Menu buttons The Phase One P+ back has four menu buttons to control the menu system on the display. When the P+ back is in its initial state (just after power up) or at the menu systems “home” position (Main screen), the four buttons each has a shortcut assigned. Play, Menu, ISO and WB. Inside the menu system arrows will indicate the function of the four buttons, the two buttons to the left are used to enter and exit the selected menus. The two buttons to the right are used to go up and down in the menu system. • • 63 From the home position, pressing the “Play” button will bring up the image browser, where you can go up and down with the right hand buttons, to browse through images. Pressing the “Menu”” button will bring you to the menu system where you can scroll up and down in the menu system with the right hand buttons to select the menu options to set. When the desired option is highlighted it can be selected by pressing the “Enter” button. Exiting the menus is done with the “Exit” button. Home shortcut Holding down the “Exit” button for a few seconds will always bring you to the home position or main screen immediately. File format shortcut Holding down the “Menu” button while in Home position colors the word “Menu” in yellow, and at the same time reveals a shortcut with the word “File” in the place where ISO was. Pressing this button at the same time will bring up the File format Menu, where you can select between IIQ Raw L and IIQ Raw S. For more explanation on selecting in the menus please consult the “Menu mode” section later in this manual. 64 Battery and Power Indicator The below screen dumps illustrates the battery and power mode indicators. This is the initial view that meets the user when switching on the P+ back (not connected to a computer) When a FireWire cable is inserted and the P+ Back draws the power from FireWire this is shown with an icon in the bottom of the main menu. When Capture One is started on the computer this is indicated with a FireWire Icon in place of the cable icon. When the P+ back is forced to get power from the battery, this is indicated with an additional battery icon. Force battery power is invoked from the “Conguration > Power Source” menu. Button Lock shortcut Holding down the “Play” button while in Home position colors the word “Play” in yellow, and at the same time reveals a shortcut with the word “Lock” in the place where WB was. Pressing this button twice when holding down the “Play” will lock operation of the four menu buttons. This is useful to avoid unintended button operation while carrying around the camera. To Unlock the buttons hold down the “Play” button again and tab the “Lock” button twice again. When the buttons are locked, the key icon is displayed just below “WB”. 65 6.4 Playmode “Play mode” can be used to review, zoom and delete images. From the “Main screen”, pressing the top left button the P+ back is set to Play mode. In Play mode the top of the screen will show a menu bar. In the right side of the menu bar the current image number and the number of images captured on the media is displayed. In this example it shows number 5 out of 19 images. Pressing the Up and Down buttons on the P+ back (right hand side) it is possible to browse through the images. Holding down the “Enter” button while pressing the up and down icon will bring you to the rst and last image accordingly. Battery life and number of captures left are also shown in the menu bar. Pressing the Play button on the P+ back (top left hand button) will step through the options available in the menu bar. From left to right these are: Review, Zoom and Delete. Pressing the “Enter” button on the P+ back (bottom left hand button) selects the option. 66 View modes Play mode has four view modes, or review modes; Normal image display, Exposure warning overlay, Histogram overlay or File Info overlay. After entering the Play mode, press the “Enter” button to shift to the view mode you want. Exposure warning overlay will knock out the highlight areas as a ashing color, to warn about burned out areas in the image. Histogram overlay will show a transparent Histogram over the image. File Info overlay will show detailed capture information like capture number, capture time, date, ISO, WB, le format and shutter speed, etc. The setting that Play mode is left in will also be the setting used for review of images while shooting. This means, that if the Play mode is set to show images with a histogram, and you then exit to the Main screen. All subsequent captures will be shown on the display with a histogram over the image. 6.5 Playmode – zoom functions After entering Play mode, advance to the eyeglass icon by pressing the Play button again and press the “Enter” button to select it. It is now possible to zoom in the image with the “Enter” button. The zoom has four amounts, Normal, enlargement 1, enlargement 2 and enlargement 3. When zoomed into enlargement 1, 2 or 3 the insert view in the lower left corner can be used to navigate around in the image. A small rectangle will show the current position and the up and down buttons can be used to scroll up and down. 67 An icon just beside the Play button (top left hand button) will show the direction of scroll as either vertical or horizontal. Pressing the “Play” button once will change this from horizontal to vertical scroll when using the up and down buttons. (Left side) To exit the zoom function use the enter key to navigate to the eyeglass icon in the pan view and press the Play button to step to the next icon in the menu bar. Or hold down the “Exit” button for more than 1 second. Browsing inside Zoom While in the Zoom tool (enlargement 1, 2 or 3) holding down the “Enter” key, will hide the two up and down indicators. It is now possible to browse through the images by pressing the Up and Down buttons without leaving the Zoom functions. This means that the exact same focus point can be evaluated on several pictures in a row by pressing “Enter” - “Up” or “Enter” - “Down”. Navigate to the delete function by pressing the play button. When in the delete view press the Up and Down buttons to browse through images. Pressing the Enter button brings up an X or a √ (checkmark). Pressing “Enter” again will select √ and delete the image. Pressing the exit button will select X and cancel the deletion. If “Conrm Delete “ is set to “Off” in the “Play Setup”, the X and √ conrmation will be skipped, and the image deleted immediately when pressing “Enter” while on the delete menu. Exit the Play mode Exit the Play Setup at any time by holding down the Play button for two seconds. 68 6.6 Menu Mode Pressing the lower left button sets the P+ back in “Menu mode” Entering Menu mode by pressing the lower left button allows you to set up all the preferences of the P+ back. Menus are navigated by following the Enter, Exit, Up and Down arrows and pressing the corresponding buttons on the P+ back. Whenever you want to exit to the main screen hold down the exit button (upper left button) for more than two seconds, and you will be back at the main screen. When entering the menu mode you have three options: Capture Setup, Play Setup and Conguration. Capture Setup Capture Setup is where you setup preferences for the capture. Enter the Capture Setup by pressing the enter button (lower left button) In the Capture Setup you can select ISO, WB, FileFormat or Shutter. By scrolling down with the down button you can select the options. Shutter Shutter refers to the shutter of the camera the P+ back resides on. Due to the sleeping architecture of the P+ back, where the CCD is put to sleep to reduce power consumption, the P+ back needs to wake up before shooting. The timing of this wake up signal is referred to as the latency. 69 In general, if the camera is used with medium format cameras with digital interface the setting can be either “Short latency” or “Long latency”. Short latency has a shorter response time, but is power consuming, so when battery time is an issue you should select “Long latency” on the cost of response time from the camera. If the P+ back is used in “two shoot mode” on a large format camera with i.e. a copal shutter or another mechanical shutter where the shutter is released one time for waking up the back, and another time for shooting the image, then the shutter setting in the P+ back should be set to “long latency”. While short latency will respond immediately to triggering the camera, Long latency will not be that fast, but in return you gain much longer battery power. FileFormat In “FileFormat” you can select two options “IIQ Raw L” and “IIQ Raw S”. “IIQ Raw” is a short term for Intelligent Image Quality Raw. “IIQ Raw L” is set as the default and is the loss less capture format of the P+ back. “IIQ Raw S” is a smaller le, and not totally loss less in the format. The “IIQ Raw L” is approximately 1/3 le size of the processed TIFF le. “IIQ Raw S” is approximately 1/5 of the processed TIFF. Most users will use the “IIQ Raw S” as there is virtually no quality difference between the two settings. Please consult the camera specic sections in this manual to learn more about how to use shutter latency with your particular P+ back setup. 70 ISO In the ISO Menu choose from ISO 50 to ISO 1600 depending on the conditions you want (number of ISO options may vary depending on which model P+ back is used). In general the higher ISO, the more noise in the image. This means that for optimal image quality, it is a better strategy to have more light in the scene, or adjusting the f-stop on the camera, than just turning to a higher ISO. When the preferred ISO setting is set press the “Enter” button to conrm the choice (green √ check mark). Or if you regret the choice and just want to go back to the previous setting (the one with the little dot), select the “Exit” button (the red X). White Balance Setting Auto WB will calculate a white balance based on the information in the image. Auto WB is good for most applications. If you are using a specic lightsource you can choose that option here. If the camera back is tethered to a computer, and white balance is set from within Capture One it is indicated with the C1 icon in place of the WB indicator on the main screen. Custom White Balance Your P+ back allows you to create up to 3 custom white balances. Custom WB is available when pressing the WB button at the main 71 When scrolling to the bottom of the WB options four options are available: “Custom1”, “Custom2”, “Custom3” and “CreateWB…” To make a new custom white balance select CreateWB… and choose which one from the following: “Custom1”, “Custom2” or “Custom3” When one of the options is selected the “Make Custom WB” will be blinking. Now you are ready to capture the image that should be used for white balancing. Place the viewnder center circle on an area (gray card or neutral white surface) and capture the image. You have now made the custom white balance and it has been set as the current capture white balance. All subsequent captures will now have the new custom white balance applied. 3 different custom white balances can be dened and used as shooting white balances. Custom white balance from Capture One You can also choose to easily transfer a white balance from Capture One to the P+ back: 1. Create a custom white balance inside Capture One. 2. While tethered to the computer select WB from the lower right button on the P+ back. 3. Select “Custom1”, “Custom2” or “Custom3” on the P+ back depending on where you want to store the new white balance. 4. Finally Click the “Set as capture white balance” button inside capture One. 72 The P+ back will beep, conrming that the custom white balance is now uploaded, will be applied when the P+ back has been disconnected. This technique is useful because you can bring up to three predened custom white balances taken in the studio to your location shoot. Please be aware that when shooting tethered to the computer, the white balance must still be set in the Capture One application. White balance cannot be set on the P+ back while tethered. Play Setup The second option in the menu mode is Play Setup. Inside “Play Setup” you can select between “Backlight”, “Auto Preview”, “Delete options” and “Brightness”. Backlight “Backlight” allow you to setup how many seconds or minutes of inactivity there may be before the light of the display fades. This setting affects the battery life of the P+ back. The more time before light is dimmed, the faster the battery is drained. Auto Preview The second option in “Play Setup” is “Auto Preview”. “Auto Preview” refers to the time the image is remains on the screen after capture. If Auto Preview is set to “Off” the preview will not be shown automatically when a capture is taken. Notice - If a button is touched during the auto preview period, the preview will remain on and the time-out will be disabled until next capture. 73 Delete options There are three delete options: “Conrm On”, “Conrm Off” and “Disable”. In Delete options you can setup whether you want an extra conrmation when you delete images (Conrm On - Default), delete images immediately (Conrm Off), or you can disable deleting of images on the P+ back to avoid unintended loss of images. Brightness In the Brighness setup, you can set the brightnes of the preview LCD screen Default setting is Bright. Only the Brightness of the display is affected. Exposure warning, histogram and nal capture is not affected by this setting. When you are outdoor with much ambient light this is helpful but especially with images taken with low-light, a brighter display is helpful. 74 Configuration “Conguration” is used to setup general settings and perform general tasks on the P+ back. “Conguration” contains more menu entries than can be displayed on one screen. This is indicated by a double arrow pointing down on the right side. Scrolling past the last menu entry will reveal the next entry. Now the arrow in the top right side will turn into a double arrow, to indicate that there are hidden entries at the top. Storage “Storage” is as default set to “Autodetect”. If a card is inserted in the P+ back it will automatically capture to this card. If not, it will try to capture via the IEEE 1394/FireWire port directly to the computer. – if a card is in the back at the same time as the back is connected by FireWire to a computer, this connection will have priority. If the P+ back is not tethered to a computer you will get an error message that the card slot in the P+ back is empty. The P+ back can also be forced to shoot to either Compact Flash or IEEE 1394/FireWire by selecting the options inside “Storage”. Power Source Power source only has two options, “Autodetect” or “Battery”. In Autodetect, the P+ back will detect if an IEEE 1394/FireWire connection is supplying power, and automatically shut of the battery power. If Power Source is set to “Battery” the power source is forced to come from the battery, and the digital back will not consume power from the FireWire connection. This is especially useful to avoid draining the battery in a MacBook or PowerBook. 75 Format disk Selecting “Format disk” will erase the CF-card in the P+ back. The CF- card will be formatted as FAT-32. Please see CF-card section for troubleshooting if your card is not recognized. Disk Checking Disk Checking is done per default on every card inserted to the P+ back. If for some reason this check is not wanted the feature can be turned off in this menu. Phase One recommends leaving diskchecking turned on, to maximize data security on memory cards. Read more about the Phase One Secure Storage system in Chapter 5 of this manual. Power Save Power Save only has two options “Auto Shutdown” and “Backlight”. Auto Shutdown is used to set the time frame before the P+ back Shuts down, when there Is no activity. If the P+ back is automatically shut down it can only be woken up by pressing the “Power” button. 76 Ready beep “Ready beep” is the small beep that sounds from the back when ready for a new capture. The “Ready beep” signals that the P+ back is ready for next capture. “Ready beep” can be either “Single”, “Multi” or “Off”. Default is “Single” “Multi” is for use in noisy surroundings, i.e. where it can be difcult to hear if it was the back or the ash that made the ready beep. Restore def. (Defaults) Selecting restore defaults will restore the settings of the P+ back to its default settings. Be careful before using this option as all settings made in the P+ back will be reset to factory settings. Time & Date In “Time & Date”_76<< 77 Language The “Language” option in the “Conguration” Menu can be used to select preferred language of the user interface. Expressions in the main menu like: ISO, WB, Play and Menu are not translated. These are regarded as icons, and also understood widely as expressions used to navigate even on the Japanese or Chinese interface. However switching to an unknown language unintentionally can be frustrating and the user can have difculty getting it back to the native language back on the menu. Phase One have made this easy by incorporating a large “L” in a parenthesis after the Language menu. Finding this “L” will help the user get the native language back. Currently (When this manual was written) the following languages are supported on the P+: English (default), Japanese, Chinese (Simplied), French, Italian, German and Spanish. If there is sufcient request for more languages, these might be made available through a Firmware upgrade. 78 About the P+ Back The “About” option in the “Setup Menu” displays technical information about the hardware and embedded software (“Firmware”) in the camera. This is especially useful if support is needed or if you want to check if Phase One is offering a newer rmware for your camera. Firmware might be made available in the download section at Before contacting your dealer or Phase One Support please make sure to have access to the “About “box or write down the entire contents of the “About” box. 79 80 7.0 Custom function The functions of Phase One 645 AF is predetermined to work in one way, but you can personalize your camera platform to work the way you prefer. No matter what you do in changes of the platforms workingspace, you can always return to default, read more on this in chapter 7.2 types of custom functions. 7.1 Setting custom functions The custom functions allow you to change the method for using or accessing the camera functions as you like. Take photographs the way you are most comfortable with. The custom functions can store separate settings for 3 users. You can preset the functions for indoor, outdoor or portrait photographs and for other conditions. When at C-00, chose 1 (A), 2 (B), or 3 (C) to store a specic set of user function selections for the group of custom settings from C-01 to C-32. However, if you set C-00 to 0, the settings used will be the default set. With this choice you can change only C-31 to 35. X M Tv Av P CF SET AEL X M Tv Av P CF SET AEL SET AEL P Av Tv M X CF 81 7.2 Types of custom functions C-00 Custom functions No. 0: [Initial setting] 1: A 2: B 3: C When “0” has been selected and set, none of the custom items can be set. “1,” “2” or “3” must be selected and set without fail. C-01 Steps of aperture, shutter speed, exposure compensation. This function is used to set the shutter speed, f-number and exposure compensation value step width. 0: 0.3 (1/3EV step: initial setting) 1: 0.5 (1/2EV step) 2: 1.0 (1EV step) C-02 Data imprinting This function is used to set whether to imprint the shooting data on the lm. 0: No imprinting (initial setting) 1: Yes (data, index) 2: Yes (date, index) C-03 Aperture setting after lens change This function is used to set the f-number display method for the previously used lens when the lenses have been changed over. The initial setting is “Yes,” in which case the f-number of the lens prior to the changeover is displayed. 0: Yes (previous f-number: initial setting) 1: Aperture open 2: Minimum aperture setting 3: Number of stops from open C-04 Metered value display time This function is used to set the time it should take for sleep mode to be established after the camera’s power is turned on. The initial setting is 15 seconds.5, 10, 15, 20, 25, 30, 40, 50, 60 or “on” can be selected and set. Note that the batteries will discharge when “on” (no sleep mode) has been set. C-05 Range of exposure compensation This function is used to set the maximum extent of exposure compensation. Its setting takes effect in AE shooting modes (P, Tv and Av). 0: ±3EV (initial setting) 1: ±5EV C-06 External LCD backlight This function is used to set the method for lighting the backlight of the external LCD panel. 0: Set using Backlight button (initial setting) 1: Always on (metering retention period) C-07 Select battery This function is used to set the batteries used in the camera so that the remaining battery charge will be displayed correctly on the external LCD panel. 0: Primary batteries (alkaline/manganese batteries, lithium batteries: initial setting) 1: Secondary batteries (nickel-metal hydride batteries, nickel-cadmium batteries) C-08 Bracketing order This function is used to set the shooting sequence for auto bracketing. The initial setting is “n-u-o” (normal/under/over). The shooting sequence for 2-frame bracketing is set in auto bracketing setting mode. 0: n-u-o (normal/under/over: initial setting) 1: n-o-u (normal/over/under) 2: u-n-o (under/normal/over) 3: o-n-u (over/normal/under) 82 C-09 Cancel auto bracket This function is used to set the release method of the auto bracketing shooting setting upon completion of auto bracketing shooting. 0: Released by turning the power OFF (initial setting) 1: Until released 2: Released after one shot C-10 Manual mode bracketing This function is used to set whether bracketing is to be performed using the shutter speed or f-number during M (manual) mode auto bracketing shooting. 0: Shutter speed (initial setting) 1: F-number C-11 Front/Rear dial function exchange in manual mode This function is used to change over the operations of the front and rear dials in the M (manual mode). 0: Front dial: shutter speed, rear dial: f-number (initial setting) 1: Front dial: f-number, rear dial: shutter speed C-12 Rear function dial enabled/disabled In the initial setting, exposure compensation can be provided by the sub (rear) dial in P, Tv and Av modes. This function is used to set whether to allocate the operations of the front dial to the rear dial. 0: No (exposure compensation: initial setting) 1: Yes C-13 Dial function direction This function is used to determine the direction in which the electronic dial is to be rotated to increase and decrease shutter speed, the f- number, and exposure compensation. 0: No switching (CCW: decrease, CW: increase: initial setting) 1: Switched (CCW: increase, CW: decrease) C-14 Program shift This function is used to set the type of program shift. Under the initial setting, the shifting is performed along the program line. Av enables aperture priority shift within the possible metering range; Tv enables shutter speed-priority shift. 0: Program shift (initial setting) 1: F-number shift 2: Shutter speed shift C-15 AEL & AFL function button exchange This function is used to set whether to change over the functions of the front and rear AEL and AFL buttons. 0: No (front: AFL, rear: AEL: initial setting) 1: Yes (front AEL, rear: AFL) C-16 Half-press shutter release function mode This function is used to set the AE lock and AF operations when the shutter button is half-pressed. 0: AF operation (initial setting) 1: AF operation/AE lock C-17 AEL function lock/unlock mode This function is used to set the method of operating the AEL button to lock AE. At the initial setting, when the AEL button is pressed, AE is locked; pressing the button again releases the AE lock. At the “1” setting (released after one shot), after AE lock is set, it is released when the shutter is tripped. At the “2” setting, AE lock is set while the shutter button is being pressed. 0: Continuous: initial setting 1: Released after one shot 2: While the shutter button is pressed C-18 Focus indicator selection This function is used to set whether the defocusing mark is to be displayed. 0: Yes (initial setting) 1: No (focusing mark only) 83 C-19 AFL function lock mode This function is used to set the AF lock method when the AFL button is operated. There is a choice between AF locking by pressing the AFL button and performing the AF operation for AF locking and AE locking. 0: Yes (AF lock only: initial setting) 1: Yes (AF operation/AE lock) C-20 M mode one-push setting This function is used to set whether one-push shift operation in manual mode is to be based on the shutter speed or f-number. 0: Shutter speed shift (initial setting) 1: F-number shift C-21 Bulb exposure time setting This function enables bulb shooting by setting the bulb shooting time from 1 to 60 minutes provided that the battery charge lasts. It can be used to decide on the bulb shooting time from 1 to 60 minutes. C-22 Bulb shutter release setting This function is used to set how to operate the shutter button for bulb shooting. At the “0” setting, the shutter is opened and closed while the shutter button is held down; at the “1” setting, it is opened and closed each time the shutter button is pressed. 0: While shutter button is pressed (initial setting) 1: Each time shutter button is pressed C-23 Shutter speed in X mode This function is used to set the shutter speed in X (synchronizing) mode. The initial setting is 1/125 sec. The kind of large ash unit for use in studios has a long ring time and so it may not synchronize at a high shutter speed setting. Take one or more test shots, and set the synchronization speed. 0: 1/125 sec. (initial setting) 1: 1/90 sec. (1/80 sec.*) 2: 1/60 sec. 3: 1/45 (1/40 sec.*) * When the exposure value step width has been set to 1/2 step C-24 Automatic sync speed setting This function is used to set the shutter speed when using the ash unit made by Metz (with the SCA3952 adapter) in P (program) or Av mode. 0: 1/60 to 1/125 sec. (initial setting) 1: Less than 1/125 sec. (metered value) C-25 TTL flash compensation mode This function is used to set whether to link exposure compensation and ash compensation when using the ash unit made by Metz (with the SCA3952 adapter). 0: Not linked (initial setting) 1: Linked C-26 AF beam setting The AF auxiliary light res automatically when the subject is too dark to perform AF, but this function can be used to prevent the AF auxiliary light from ring. 0: Fires (initial setting) 1: Does not re 84 C-27 Flash sync. timing When a moving subject has been shot using the ash, a ash of light will appear ahead of the subject’s movement under the initial setting. This function makes it possible to change this so that the ash of light comes after the moving subject as illustrated. 0: No (front curtain synchronization: initial setting) 1: Yes (rear curtain synchronization) C-28 Copy custom function This function is used to group all the user symbol settings selected (custom functions that have been set) together with the other user symbols, and copy them. 0: No (initial setting) 1: Yes (copied to user A) 2: Yes (copied to user B) 3: Yes (copied to user C) C-29 Custom function reset This function is used to group all the user function settings selected from C-01 to C-27 together, and initialize them (to the default settings). 0: No (initial setting) 1: Yes C-30 Shutter release without film This function is used to set whether the shutter is to be tripped even when the lm has not been loaded. 0: No (initial setting) 1: Yes C-31 Auto film loading setting This function is used to set whether to feed the lm (to the rst frame) by halfpressing the shutter button or by closing the rear cover when the lm has been loaded. The lm can be fed to the rst frame by halfpressing the shutter button even when the rear cover close has been established as the setting. 0: By half-pressing the shutter button (initial setting) 1: By closing the rear cover When the camera is in sleep mode, the lm will not start moving even when the rear cover is closed. Half-press the shutter button. C-32 Multiple exposure mode This function is used to select whether, during multiple exposure shooting, the multiple exposures are to be taken by pressing the shutter button until the multiple exposure button is pressed after the number of multiple exposures has been set (initial setting). When the number of multiple exposures is set, the lm is wound up by one frame after the completion of the number of multiple exposures. 0: Until the multiple exposure button is pressed (initial setting) 1: Multiple exposure number is set C-33 Digital back CF configuration This function is used to select the user function (A, B or C) when an MSCElisted digital pack has been loaded. 0: No (initial setting) 1: A 2: B 3: C C-34 Clock/calendar setting This function is used for setting the calendar and clock. C-35 Index setting This function is used for setting the index numbers. – ONLY relevant when using lmback. C-36 firmware version - Press+/- to view rmware version: number on top of display is body rmware. Number on bottom of display is rmware number for the mounted lens. 85 Custom Functions overview No. Item Initial setting (0) 1 2 3 C-00 Custom Function User standard User A User B User C C-01 Steps of aperture, shutter speed, Exposure compensation 1/3 EV step 1/2EV step 1 EV step C-02 Data imprint No Data, Index Index, Date C-03 Aperture setting after lens change Previous aperture value Open Minimum C-04 Metered value display time 15 sec. 1:20 2:25 3:30 4:40 5:50 6:60 7:ON 8:5 9:10 C-05 Range of exposure compensation ± 3EV ± 5EV C-06 External LCD backlight set using backlightbutton C-07 Battery type Alkaline/manganese (standard batteries) Ni-CD, Ni-MH C-08 Bracketing order Normal-Under-Over N-O-U U-N-O O-N-U C-09 How to cancel Bracketing Power OFF Push bracketing button Release after one session C-10 Bracketing variable Shutter speed Aperture value C-11 Front/Rear dial function exchange in manual mode Front:Tv Rear :Av Front:Av Rear :Tv C-12 Rear function dial enabled/disabled in P, Av, Tv Enabled Disabled C-13 Dial function direction Right: decrease Left: increase Right: increase Left: decrease C-14 Program PH/PL PHigh/PLow Av Shift Tv Shift C-15 AEL & AFL function button exchange Front: AFL Rear : AEL Front: AEL Rear : AFL C-16 Shutter half-press function AF operation AF operation & AE Lock C-17 AEL function lock/unlock mode Until turned off Released after one shot While the shutter button is pressed C-18 Focus Assist On Off C-19 AFL function lock mode No AF operation before locking AF operation before locking 1st push Lock 2nd push unlock C-20 M-mode in one push setting Shutter speed shift Aperture value shift C-21 Bulb exposure time setting 60 minutes 1:Off 2:1 3:2 4:4 5:8 6:15 7:30 C-22 Bulb shutter release setting While shutter button is pressed start/stop exposure when shutter is pressed C-23 Shutter Speed in X mode 1/125 1/90(1/80) 1/60 1/45 C-24 Automatic sync speed 1/60 to 1/125 metered value (less than 1/125) C-25 TTL ash compensation mode On Off C-26 AF beam setting On Off C-27 Flash sync. timing 1st Curtain 2nd Curtain C-28 Copy custom function No copy Copy to A Copy to B Copy to C C-29 Custom function Keep settings Reset settings C-30 Shutter release without lm No Yes C-31 Auto lm loading setting Shutter button half-pressed Closing rear cover C-32 Multiple exposure mode stop Until multiple exposure button is pressed multiple exposure number setting C-33 Custom function preference Standard settings User A User B User C C-34 Clock/Calendar setting C-35 Index setting C-36 Firmware version Press EV+/-, the top number is body rmware the below number is lensrmare Only relevant when using lmback 86 8.0 Lenses and Multi Mount Phase One provides the widest range of possibilities, when it comes to lenses, this increases the possible creative solutions for the photographer, This chapter is looking closer at some possible lenses, but it is worth noticing that there are more lenses usable than what we show here. The enthusiastic user can nd loads of information on the internet as well as mount-adaptors, like the Phase One Multi-Mount. Phase One will of course always recommend our products, but we are well aware that creativity will only grow co-operation and by using the free market and free choices that follows the market. Please note, errors or damage caused by thirds party products is not covered by the warranty, test new product with curious caution. 8.1 Functions of the Phase One lens The Phase One 80mm f.2,8 - a sharp, new and well tested digital photography prepared lens. The lens is mounted by aligning the white dot on the lens right in front of the white dot on the camerabody, carefully mount the lens by turning it clockwise, until a clicksound is heard, if you feel resistance or if you hear a grate-like sound stop and retry – NEVER use force when mounting the lens, it should always slide in without resistance. The lens has 2 rings, the inner ring provides the possibility of changing the focus mode without changing grip of the body, keep the focus selector on the camera body on either “S” or “C” to decide whether focusing single or continuously, and decide whether to do autofocus or manual focus on the inner ring of the lens. The focus ring is the outer ring on the lens, use this ring to manually set the focus, read more on focusing in the chapter 3.4 regarding autofocus. Notice: If you select MF on the camera body, you might have to turn the camera off before the autofocus will start. 87 8.2 Function of the Phase One lens adaptor To mount the Phase One Multi-Mount, match the white dot on the camera up with the white dot on the Multi-Mount and turn slowly clockwise, NEVER use force to mount the ring. When the Phase One Multi-Mount is mounted you can t Carl Zeiss/Hasselblad V and Hasselblad 200series lenses on the camera. 88 8.3 List of alternative lenses Recommended digital lenses Producer specs limitations adaptor/mount notice Mamiya 28 f.4,5 AFD Mamiya 645AFD Sekor Mamiya 75-150 f.4,5 Mamiya 645AFD Sekor Recommended lenses Mamiya 35 f.3,5 Mamiya 645AFD Mamiya 45 f.2,8 Mamiya 645AFD Mamiya 55 f.2,8 Mamiya 645AFD Mamiya 150 f.3,5 Mamiya 645AFD Mamiya 210 f.4,0 Mamiya 645AFD ULD Mamiya 300 f.4,5 Mamiya 645AFD APO Mamiya 55-110 f.4,5 Mamiya 645AFD Mamiya 105-210 f.4,5 Mamiya 645AFD ULD Producer specs limitations adaptor/mount notice Recommended MF lenses Mamiya A 500 f.4,5 1+2 Mamiya 645 MF Mamiya A 300 f.2,8 1+2 Mamiya 645 MF+APO Mamiya A 200 f.2,8 1+2 Mamiya 645 MF+APO Mamiya 55 1+2 Mamiya 645 leafshutter Mamiya 80 f.2,8 N/L 1+2 Mamiya 645 leafshutter Mamiya 150 f.3,8 N/L 1+2 Mamiya 645 leafshutter Mamiya 105-210 f.4,5 1+2 Mamiya 645 Mamiya 500 f.5,6 1+2 Mamiya 645 Mamiya 55-110 f.4,5 N 1+2 Mamiya 645 Mamiya 150 f.2,8 1+2 Mamiya 645 Mamiya 300 1+2 Mamiya 645 Mamiya 24 f.4,0 1+2 Mamiya 645 Mamiya 35 1+2 Mamiya 645 Mamiya 150 f.3,5 N 1+2 Mamiya 645 Mamiya 45 1+2 Mamiya 645 Mamiya 210 N 1+2 Mamiya 645 Mamiya 80 f.1,9 1+2 Mamiya 645 Mamiya 55 1+2 Mamiya 645 Mamiya 80 f. 2,8 N 1+2 Mamiya 645 Hartblei MC TS-PC 45 f. 3,5 mamiya/Pentacon six super-rotator tilt/shift Hartblei MC Hartblei 2x converter pentacon six Producer specs limitations adaptor/mount notice Arsat MC Arsat 30 f.3,5 sheye Pentacon six Arsat MC Arsat 45 f.3,5 Wide Angle Pentacon six Arsat MC PCS Arsat 45 f.3,5 Pentacon six shift Arsat MC PCS Arsat 55 f.4,5 Pentacon six shift Arsat MC PCS Arsat 65 f.3,5 Pentacon six shift Arsat MC Arsat 80 f.2,8 Pentacon six Arsat MC Arsat 600 f.8,0 Pentacon six Mirror Lenses usable in combination with Phase One Multi-Mount Carl Zeiss CFi 30 f.3,5 3 hasselblad V Carl Zeiss CFE 40 f.4,0 3 hasselblad V Carl Zeiss CFi 50 f.4,0 3 hasselblad V Carl Zeiss CFi 60 f.3,5 3 hasselblad V Carl Zeiss CFE 80 f.2,8 3 hasselblad V Carl Zeiss CFi 100 f.3,5 3 hasselblad V Carl Zeiss CFE 120 f. 4,0 3 hasselblad V Carl Zeiss CFi 150 f.4,0 3 hasselblad V Carl Zeiss CFE 180 f.4,0 3 hasselblad V Carl Zeiss CFi 250 f.5,6 3 hasselblad V Carl Zeiss CFE 350 f.5,6 3 hasselblad V SA Special purpose lenses Mamiya 120 f.4,0 MACRO Mamiya 645 MF Mamiya 50 SHIFT 1 Mamiya 645 MF Mamiya 645 Auto bellows unit 1 Mamiya 645 Mamiya 80 MACRO 1 Mamiya 645 Other lenses usable in combination with adapter Hasselblad 30 sheye Hasselblad 40 Hasselblad 50 Pentacon ektogon 50 Arsat 55mm Shift Biometar 80mm Biometer 120mm Sonnar 180mm Limitationcodes: 1: Stopped down metering not possible 2: Discontinued 3: Leaf shutter disables, only aperture priority 89 8.4 Lens Cast What is Lens Cast? Lens cast may occur if using the camera back with wide- angle lenses e.g. Horseman Digiex II, Hasselblad Flexbody or Hasselblad 905SWC or on other large format cameras with different tilt or swing settings. On a medium format camera lens cast is very rare if using xed lenses from 60 mm to 120 mm. Why does lens cast occur? Lens cast occurs as a result of the angle at which the CCD is exposed to light. If the CCD is exposed to light coming from a very sharp angle e.g. wide-angle or extreme degrees of tilting you may experience lens cast. What does it look like? Depending on the light conditions and photographic setup, lens cast can appear differently, on some lens/back combinations there will appear to be a transition from green-ish to magenta-ish, but blue, red - all colors can appear, if you want to test your lens for color cast, take a photo of a grey wall or cardboard. And check this image for colors. How to get rid of Lens Cast? If working with large format cameras with tilt and swing, you would have to make a new calibration le if you change the tilt and swing position. Phase One provides a solution in the Capture one software that helps you get rid of the lens cast. We call it: Lens Cast Calibration (LCC). By holding an opal white plate in front of the lens and capturing a calibration image that you then apply to all of your capture les you are able to remove the lens cast. On medium format cameras the calibration is very simple: You do one calibration for each lens and then save the calibration les and apply them when needed by clicking “Set as default for new Captures”. 90 8.5 4 simple steps to calibrate on fixed lenses (MAC) 1. Hold the calibration plate in front of the xed lens (as close as possible), and capture. In order to ensure correct exposure you may have to up a few f-stops or in a very dark setup, put on more light directly onto the plate. 2. In the Capture One software you select the ‘calibration’ image and click on the “Save LCC” button (the LCC tool is located under the grey balance tab). 3. Give the calibration le a name that corresponds to the lens in use or the set-up (e.g. 45mmDaylight) 4. Select the calibration le: “45mm Daylight” from the “Lens CC” drop- down list and click “Set as default for new Captures”. 8.6 Large format and stitched images (MAC) Large format and stitched images (Mac) When using LCC in combination with large format capture and image stitching you must capture one calibration le per image and make sure that you match the calibration le to the right image prior to stitching. 1. Start by capturing the two calibration les and the two image les. 2. Save the right-side calibration le by clicking the “Save LCC” 3. Name the calibration le i.e.CarsRight. 4. Save the left-side calibration le the same way. 5. Select the right-side image and apply the right-side calibration le. Select the left-side image and apply the left-side calibration le. 6. A simple way to gray calibrate is to select all images in the thumbnail window. Click on the left-side center of the right side calibration le and apply that gray balance to all images by clicking the “Apply to all selected” (remember to only select apply gray balance in the dialog). PLEASE NOTE: As soon as grey calibration is done and the calibration files are saved and appear in the Lens CC drop-down box the calibration files can be deleted from the thumbnail window. 91 8.7 4 simple steps to calibrate on fixed lenses (PC) 1. Hold the calibration plate in front of the xed lens (as close as possible), and capture. In order to ensure correct exposure you may have to open up a few f-stops or in a very dark setup, put on more light directly onto the plate. 2. In the Capture One software you select the ‘calibration’ image and click on the “Generate…” button (the LCC tool is located under the white balance tab). 3. Give the calibration le a name that corresponds to the lens in use or the set-up (e.g. 45mmDaylight) 4. Select the calibration le: “45mm Daylight” from the “Lens CC” drop- down list and set a checkmark in the “Apply LCC for next Captures”. If working with large format cameras with tilt and swing, you would have to make a new calibration le if you change the tilt and swing position. When using LCC in combination with large format capture and image stitching you must capture one calibration le per image and make sure that you match the calibration le to the right image prior to stitching. 92 8.8 Large format and stitched images (PC) 1. Start by capturing the two calibration les and the two image les. 2. Save the left-side calibration le by clicking the “Generate…” button 3. Name the calibration le i.e.CarsLeft. 4. Save the right-side calibration le the same way. 5. Select the right-side image and apply the right- side calibration le. Select the left-side image and apply the left-side calibration le. 6. A simple way to white balance is to select all images in the thumbnail window and click on the left-side center of the right side calibration le and apply that white balance to all images by clicking the “Apply this White Balance to the current selection of captures”. 93 94 9.0 Software Capture One 4.1 Digital Back Only, is a part of the new Phase One camera platform. For further information regarding functions of Capture One 4.1 please read the users guide for this, the user guide is found under the menu help on win and on MAC. 9.1 Getting started The user interface of Capture One 4.1 is very close to the original Capture One 4, you will nd the well known tabs Library, Quick, Color, Exposure, Composition, Details, Adjustments, Meta, Process and Batch, besides these we have added a new tab, Capture; this tab provides of course the possibility of making the Capture via the connected FireWire cable. You can control certain parts of the camera settings such as ISO format and White Balance. Read the software manual before using the software, and do test shots before using the software for professional use. 95 9.2 Importing from CF card Like using e.g. win-explorer it is easy to browse to a disk containing RAW images on a local or network computer. You can also choose to import directly from a memory card in a card reader. Choose File > Import images or select the import images button to begin the import process. Immediately, a large dialogue box appears showing a preview of les to be imported. This dialogue box also provides a range of options form which to choose. Inserting a memory card into a card reader will automatically bring up the import dialogue window. The Import window offers a range of options to make importing a quick and straightforward task. It is important to remember that you are importing images from one location to another. You need to create or dene a folder to which the les will be imported to. This can be done manually or through the Locations tab in the le importer window. Capture One 4 can automatically create subfolders, named by date or user dened. When importing you can also choose to rename the les, as they are imported from the camera or cd/memorycard, the le names can be changed by double-clicking on the lename in the browser or when exporting the les. 96 10.0 Large format and technical cameras Phase One’s status as open platform does not only mean the possibility of tting the back on different medium format cameras, but also large format and Technical cameras. 10.1 Large format photography You can do large format photography, digital captures with the Phase One back. As the light sensitive chip in the P-back is not (yet!) 4x5” or 8x10 you have to use an adapter to move the back to capture the entire view, the FlexAdapter is a sliding back used to connect a Phase One back to a large format camera. A ground glass is provided for initial set-up that slides over to position the digital back in the perfect orientation to the focused area. Markings are provided for the stitching function which allows two captures to be taken, beside each other with a slight overlap. Capture One software automatically stitches these together with the built in stitch tool. The design is simple and clever, using a standard lens board mounted on the back of the adapter for specic large format systems. Currently there are versions for Sinar, Cambo, Arca Swiss, Linhof and Toyo systems. Large format cameras that use a lens board for mounting the ground glass assembly can be custom adapted (custom adaptation service not provided by Phase One). All Phase One camera solutions from the P 21+ to the P 45+ can be used with the FlexAdapter and all backs can provide stitched images. Capture One PRO provides stitch function to put the captures together in one large format le; this is still used in architectural photography. - Please read the specic largeformat leaets, and consult your local dealer, you will be amazed of the possibilities. 97 10.2 Technical cameras The use of technical cameras is growing. Images taken with a technical camera can have different look and feel compared to DSLR or medium format capture. The look is achieved through unique focal lengths, use of rise/drop and shift movements available since photography began and a different optical point of view. For many photographers, quality cannot be compromised. A technical camera provides signicantly more optical quality especially when combined with a Phase One back and Capture One software. The optical path is straight and simple with no mirror systems to worry about. This removes the need of retro-focus design wide-angle lenses that compromise image quality with DSLRs and mediumformat. Both Rodenstock and Schneider have produced technical camera lenses that are tuned to the capture area and quality requirements for digital photography. A technical camera solution offers the sharpest possible results. For more information on technical photography consult your local dealer. 98 11.0 Maintenance In general very little maintenance is needed, but this is a professional tool, and should be treated with care and caution. If the gear for some reason have not been used for a period, you should always do test shots before the photographic session. A frequently used product should be inspected periodically at the nearest ofcial Phase One repair center. Should there be errors or malfunctions of camera, lens or back – do NOT try to repair – consult your local dealer. 11.1 changing the focusing screen 1. Remove the lens. 2. Pull the Focusing Screen Release lever A forward, as illustrated, with the tweezers to let the Focusing Screen down. 3. Remove the Focusing Screen from the Focusing Screen Frame by grasping the tab on the edge of the screen with tweezers as illustrated. 4. When installing the screen, pinch the tab of the screen with tweezers, and put the screen on the screen frame. 5. Push up the screen frame using the tweezers until hearing a clicking sound. The screen is now properly installed. - Never press down on other parts as this will affect the focus function. NOTICE Since the Focusing Screens’ surfaces are soft and easily damaged, handle them carefully. Never touch the surface with bare fingers. Should dust settle on it, merely blow away by using a blower. If the Focusing Screen needs cleaning, send it to the nearest authorized Phase One service center. Do not attempt to clean the surface of the Focusing Screen, as it is very delicate. Do not touch and damage the mirror in any way. A 99 11.2 Battery socket Never leave batteries in the socket, if the camera or back is not supposed to be used for longer periods. Keep contacts clean and dry at all times. 11.3 Tripod/Electronic shutter release contact Keep all contact clean and dry at all time. When using a tripod with 3/8” screw (instead of 1/4” screw) remove the small screw A from the tripod screw hole on the bottom of the body using a plus screwdriver, then use a coin to remove the tripod screw adapter bushing B. You will nd Electronic shutter release both on the camera body and on the back. When used, it is recommendable to use the shutter release on the back. Keep both contacts dry and clean. A B 100 11.4 Camera display error-notification LCD Display Causes and Remedies Main LCD panel Viewfinder LCD readouts Problems Remedies If the camera cannot focus in the AF “S” (single) mode, you cannot release the shutter. Try to adjust focus again, or change to the focus lock mode or manual focus mode The indicator appears when the battery capacity is low. Replace with new batteries The shutter will not operate when the digital back is not mounted on the body. If you press the shutter release, this symbol will appear. Mount a digital back. This symbol appears when setting the custom functions but no choice of user is made. Select a user before changing custom settings. CF-00 While in manual exposure mode, and when the difference between the set value and metered value exceeds + or – 6EV, this indicator will appear. Change aperture or shutter speed. This will appear when a lens is not mounted Try mounting a lens…! When “Err” appears, some abnormality has been detected in the course of taking photos. Replace with new batteries and press the shutter release button, if the “Err” does not disappear then contact your local dealer. batt batt -no- Fb -no- Fb - u - - o - F- - - - Err- 01 Err- 02 Err- 03 Err- 04 Err- 05 Err- 06 Err- 07 101 11.5 Lens maintenance Never touch the inner optics of the lens with your ngers, keep the inneroptics perfectly clean with air, lens brush or the dry cloth delivered with the lens. Do not touch the contacts; keep the contacts clean, either by dry cloth or by using berglass brush, do not use tools of any kind on the lens. The lens is not waterproof, if wet it should be dried with a cloth, if exposed to salt, moisten a cloth, wring it and clean. 11.6 Back Maintenance Cleaning the CCD When the Phase One P+ back is not attached to a camera, the camera back must be protected with the protection plate. However, over time dust may accumulate on the IR lter. This will degrade the image quality if not removed. Please follow the directions included in the CCD cleaning kit in the P+ back suitcase. 102 11.7 housing specification Camera Type : 6x4.5cm format, electronically controlled focal-plane shutter, TTL multiple mode AE, AF Single Lens Reex Actual Image Size : 56x41.5 mm Lens Mount : Mamiya 645 AF Mount, compatible with M645 Mount (manual focus conrmation, focus aid, stopped-down exposure metering) Viewfinder : Fixed prism viewnder magnication x0.71; built-in diopter adjustment (-2.5 to +0.5, optional diopter correction lenses provide adjustment ranges of -5 to -2 diopter and 0 to +3 diopter); built-in eye-piece shutter Focusing Screen : Interchangeable, Matte (standard), Checker, and Microprism Type C for Non-AF M645 lenses. Field of View : 94%* of actual image Viewfinder Info : Focus mark, defocus mark, warning mark, aperture value, shutter speed, metering mode (A, S, A/S), exposure compensation value (difference between set value and metered value) and ash ready / OK lamp with TTL Metz connection. AF method : TTL phase difference detection method; sensor: CCD line sensor (I+I type); operating range: EV0 to EV18 (ISO 100) Focus area : Display the focus area in the viewnder screen AF assist beam : Activates automatically under low light, low contrast. Range: 9m, Automatic switching to ash unit’s built-in assist beam if Metz ash unit is attached. AF Lock : By pressing the shutter release button halfway down in the AF-S mode, or by pressing the AFL button. Exposure Modes : Aperture-priority AE, shutter-priority AE, programmed AE (PH, PL setting possible), and manual AE metering mode : TTL metering, center-weighted average (AV), spot (S), and variable ratio (A-S auto) Shutter increments : Both the shutter speed and the aperture level can be set to 1/3 or speed and aperture 1/2 using the electronic dial lock function Metering Range : EV 2 to EV 19 (with ISO100 lm, f/2.8 lens) Exp. comp. : ±3 EV (1/3 step) Expandable to ±5 EV Film speed : ISO 25 to 6400 AE lock : With AEL button; canceled by pressing the button again or shutter release Shutter : Electronically controlled vertical metal focal-plane shutter. (vertical travel) Shutter speed : AE 30 to 1/4000 sec. (1/8 step), manual 30 to 1/4000 sec. (1/2 or 1/3 steps), X, B (Bulb, electronically controlled), shutter curtain protection mechanism (open when magazine is removed, automatically closed when magazine is attached) Auto bracket shot : Enable with auto bracket button (2 frame shots, or 3 frame shot with auto bracketing). Specify 1/3, 1/2, 2/3 or 1EV steps. Flash Synchro : X contact point, 1/125 seconds (when 1/3 step is selected it can be set between 1/40 and 1/125 seconds). Flash control : TTL direct ash control, supports Metz SCA3002 system (SCA3952 Adapter) Multiple Exposure : Enable with multiple exposure button (the number of exposures can be set from 2 to 6). It can be canceled in the middle and the number of exposures can be changed, or you can switch to an arbitrary multiple exposure style. Mirror up shot : Select by pressing the mirror up button. LCD displays : Main LCD display: Program mode mark, custom function mode mark, AF area mark, battery level indicator, manual focus mode, superimpose mode, dial lock mark, shutter speed, AE lock mark, aperture value, multiple exposure mode mark, exposure compensation mode mark, ash compensation mark, exposure compensation value, self-timer mark, auto bracket mark, time mark (while setting the clock). Data Imprinting : 7 segment dot matrix; DATA mode: exposure mode, aperture value, shutter speed value, exposure compensation, metering mode, ID number; DAY mode; year, month, date, time, ID number, ID mode; ID number Sync terminal : X contact (sync speed 1/125 sec.) Cable release : On shutter button Remote terminal : On side of body; electromagnetic cable release Self-Timer : 2 to 60 sec. (standard: 10 sec., can be set in 1 sec. steps between 2 and 10 sec., and in 10 sec. steps between 10 and 60 sec.) Depth-of-field confirmation : Preview Button on body Custom settings : 35 items + Firmware Information Tripod Socket : U 1/4 inch and U 3/8 included Power Requirements : 6 AA-size batteries (alkaline-magnesium, lithium) External power socket : An external battery case can be connected. Size & Weight : 6 ”(W)X5 ”(H)X7.3 ”(D) / 153(W)X128(H)X184(D)mm : 3.8 pounds / 1,730 g (W/O battery) * This information is based on a linear (horizontal/vertical) measurement. 103 11.8 P+ series Technical specifications Please read the schedule for detailed overview of the different backs. 104 11.9 End User support Policy Please check for updated support policy By purchase of a Phase One product we guarantee you World Class Support and Service! World Wide Dealer Network At Phase One we think globally but act locally. Phase One’s products are sold through a world wide network of dedicated and competent local partners to make after-sales support convenient for you. Phase One’s local partners offer rst, then please contact Phase One directly, and we will assist you directly or through one of our partners. Find your local Phase One partner or take advantage of Phase One’s wide range of on-line support tools at. 105 FAQ, Tutorials & Documentation The FAQ is a collection of the most frequently asked questions and related answers in the Phase One Knowledge Base. Use the FAQ as the first and best place to find answers on many technical questions. If you are seeking more detailed information about Capture One, Portrait One, or our Digital Backs, you can download user guides and manuals or watch some of the tutorials available. Knowledge Base Phase One’s searchable Knowledge Base at provides you with detailed answers to most of your questions. This ‘self-service’ site is free of charge and available to all Phase One owners. Capture One On-line Support Forums On Phase One’s official support forum you may share your experiences and get assistance from other Phase One owners as well as from Phase One’s Technical Support team. Some Phase One partners offer on-line support forums, hosted from their own web pages. Please note that these forums are governed by separate rules. Phase One offers no guarantees and assumes no responsibility or liability with respect to the support provided by our local partners. On-Line Support You can contact Phase One Technical Support directly by creating an on-line support case on Phase One Technical Support will do it’s best to answer your question as quickly as possible. 106 />
https://usermanual.wiki/Phase-One/PhaseOne645AfUsersManual518342.959685904/html
CC-MAIN-2021-21
refinedweb
23,887
62.07
First you need to install the Serial library: apt-get install python-serial The Sparkfun adapter uses FTDI so it shows up as /dev/ttyUSB0 I found this, it works really well. Simple TCP/IP bridge I started with this link: I couldn’t get it to show data from the Xbee at first so I used Minicom to test: Somehow it started working once I saw data using Minicom. Here is the final code to test read from serial with Python[xbee_read.py] #!/usr/bin/python import serial ser = serial.Serial('/dev/ttyUSB0', 9600) ser.open() try: while 1 : result = ser.readline() print result except KeyboardInterrupt: ser.close() I was able to see the data from my Smart Outlet. I used both the xbee read and the tcpip/serial by using putty on my computer to telnet to the pi. Now I don’t need to use my desktop as a Serial-TCP/IP gateway and I can use Python to process the data from the Smart Outlet or anything else. Or talk directly to my robots. This raspberry Pi is becoming more helpful by the day. Now I am not sure if multiple devices can use the serial at the same time, have to look into that or combine a the TCP/IP server with extra later for processing data. Maybe I could make all my Python script use the TCP/IP gateway instead of Serial. I might use API mode on the xbee to talk to multiple xbees or have packet headers to know which device they come from. Update: The first time I did this it worked, then while writing this I couldn’t get it to work again. I will have to try again later with a powered USB hub, but I am pretty sure that isn’t the problem. It is the same problem I have originally where there isn’t any output from the Xbee. Now Minicom just exits instead of showing the data from the Xbee. 08/28/2012 Update 2: I ended up connecting the Xbee to the UART. You need to disable getty first which you can find here: After I got it working on the UART minicom still wouldn’t work. Then I realized it has to be run with SUDO! D’oh! Now I might try the USB again, but reading it via Python didn’t work over USB anyway, which I did run with Sudo. At least now I freed up a USB port. I have to at least try USB again. 08/28/2012 Update 3: I tired again with USB and it works but only if you run Minicom first using sudo. Once you do that then the Python script will work. I am going to stick with using the UART which works perfectly. If you get it to work reliably let me know in the comments. I was hitting the same thing with my scripts: I found that after you exit Minicom, it would work. If you look at the settings right as you exit Minicom (stty -F /dev/ttyUSB0) you'll see that the speed is set to 0. If you run stty -F /dev/ttyUSB0 speed 0 you'll get some bogus error that it can't complete, but it actually does set the speed to 0. I added that shell command to the start of my scripts and it works every time now. I still haven't had time to dig into the root cause though.
https://mobilewill.us/quicknote-raspberry-pi-python-serial-updated/
CC-MAIN-2020-24
refinedweb
585
80.51
When defining a method on a class in Python, it looks something like this: class MyClass(object): def __init__(self, x, y): self.x = x self.y = y I like to quote Peters' Zen of Python. "Explicit is better than implicit." In Java and C++, ' this.' can be deduced, except when you have variable names that make it impossible to deduce. So you sometimes need it and sometimes don't. Python elects to make things like this explicit rather than based on a rule. Additionally, since nothing is implied or assumed, parts of the implementation are exposed. self.__class__, self.__dict__ and other "internal" structures are available in an obvious way.
https://codedump.io/share/w0Jehd7I6Isa/1/why-do-you-need-explicitly-have-the-quotselfquot-argument-into-a-python-method
CC-MAIN-2017-04
refinedweb
111
69.38
Code Sample: OAuth 2.0 Delegation Published: April 7, 2011 Updated: April 9, 2013 Applies To: Windows Azure This sample illustrates an end-to-end scenario that includes a resource protected by Windows Azure Active Directory Access Control (also known as Access Control Service or ACS) and a client application that consumes this resource. The web application uses the OAuth protocol, version 2.0, draft 13, and accesses the ACS OAuth 2.0 endpoint to obtain an access token for the resource. This version of the sample targets the ACS production environment in the Windows Azure portal (). The code for this sample is part of the Windows Identity Foundation Extension for OAuth CTP and can be downloaded at. Follow the steps outlined below to run the sample. Prerequisites - Windows Server 2008 R2 or Windows 7 - Visual Studio 2010 - Internet Information Services (IIS) enabled with IIS Metabase and IIS6 Configuration Compatibility - Windows Identity Foundation () Scenario Description This sample depicts two different companies. One company, Contoso, has a Representational State Transfer (REST) web service for information about certain Contoso employees that is exposed on the web. This service contains two components: the End-user endpoint, which allows users of the service to log in and delegate permission to applications to access their data, and the Service endpoint, which requires an OAuth access token issued by ACS in order to access data in it. The second company in the solution, Fabrikam, offers a portal in which subscribers to the Contoso employee information can view the information. This web server application performs the role of the “OAuth client”. If an access token has expired, the Fabrikam site is configured to automatically obtain a new token using the refresh token that ACS sent with the original access token. Projects included in this sample Set up the Sample The sample must be configured to target a tenant. Use the following procedure: - Run SetupSample.bat in the Sample Root Directory - Open the Solution File - Set up the ACS - Enter the configuration information into the sample file. Run SetupSample.bat in the Sample Root Directory Run the SetupSample.bat batch file with administrator permissions. The directory which you run the samples from must not be located in the folder C:\Users (which includes the Desktop and Documents folders) for the samples to function correctly. This will set up the Internet Information Services (IIS) with virtual directories for the samples and will set up the certificates necessary to use https in the samples. Open the Solution File Open the solution file, OAuth with ACS.sln, in Visual Studio 2010 as an administrator. Set up ACS To create an Access Control namespace Go to the Windows Azure Management Portal, sign in, and then click Active Directory. To create an Access Control namespace, click New, click App Services, click Access Control, and then click Quick Create. (Or, click Access Control Namespaces before clicking New.) Enter a name for your namespace. This name will be a unique subdomain in the accesscontrol.windows.net domain. For example, if you enter contoso, your Access Control namespace endpoints will reside as paths of. Also, in the value of the ServiceNamespace variable in the SamplesConfiguration.cs sample application, enter the name of your Access Control namespace. The portal verifies that the name that you selected for your Access Control namespace is valid and available. If it is not, enter a different name. Select a region and then click OK. The system now creates and activates your Access Control namespace. You might have to wait several minutes as the system provisions resources for your account. To create a new Service Identity To manage an Access Control namespace, select the namespace, and then click Manage. (Or, click Access Control Namespaces, select the namespace, and then click Manage.) This action opens the ACS portal. You can use the portal to configure the Access Control namespace. It allows you to add identity providers, configure relying parties, define rules and groups of rules, establish the credentials your relying party will trust, and it offers tips about how to integrate into your relying party. Click Service identities. In this sample, we will create a new service identity to be used as the OAuth credential that the Fabrikam site uses to access the Customer Information Service. To create a new service identity, click Add. In the Name field, enter FabrikamClient. From the Type list, select Password, and in the Password field, enter FabrikamSecret. To add a new Relying Party Application Click Relying party applications. To create a new application, click Add. Enter Customer Information Service for the name of the relying party application, enter for the Realm, and for the Return URL. For the token format, select SWT, and clear the Windows Live ID box. To generate a symmetric key that will be used by the service to validate the signature of the SWT tokens it receives, under Token Signing Options, click Generate. Enter this key in the sample application in SamplesConfiguration.cs as the value of the RelyingPartySigningKey variable. Click Save. Enter the Configuration Information into the Sample To enter the configuration information into the sample By now, the Access Control namespace name and the Relying Party Signing Key have been entered in the SamplesConfiguration.cs file in the Visual Studio solution. To enter the Management Service Key in the ManagementServiceIdentityKey variable in SamplesConfiguration.cs, click Management service, click Management Client, and then click Symmetric Key. This key is used to communicate with the ACS Management Service. To add the final configuration to ACS, in Visual Studio, right-click ConfigureAcsConsoleApplication, click Debug, and then click Start new instance. When the console application completes, press any button to continue. Scenario walkthrough The following diagram shows a high-level view of the OAuth protocol flow of this sample. In this diagram, the green boxes represent OAuth-specific components that use the WIF extensions to handle the protocol flow separately from the main web site. The red box represents ACS as a cloud service that has been configured to interact with the various sample components. The blue boxes represent the other components that a developer implementing this scenario would consider. This flow uses the Authorization Code access grant type. This is used when the OAuth client is capable of interacting with the end-user’s web browser or other user agent. In this case, the client redirects the user’s browser to the End-user endpoint to authenticate with the service that will be accessed. After the user has been authenticated, the user has to consent to the client’s access request. When the user grants access, the browser is redirected back to the client with an Authorization code with which an access token can be requested. The access token is stored once it is received and will be attached to subsequent requests to the service until it expires. ACS also sends refresh tokens which can be used to request new access tokens without having to redirect the user away from your site. Running the Sample To run the sample If the solution is not already open, open it in Visual Studio. Build the solution. In a web browser, navigate to. This site will access data from the CustomerInformationService on your behalf. Click Populate all data. Your browser will be redirected to the Customer Information Service login page. Enter the following credentials: - User name: John - Password: password Click Log In. You will be directed to the delegation site where you can choose to allow the Fabrikam Information Portal to have access to your data. Click Submit. This will return you to the original site with the data populated. The OAuth Client has used the information in the response to request an OAuth “Access token” that it attaches to a request to the Customer Information Service. This token will be used in subsequent requests so that you no longer need to provide credentials to log in to the site. The contents of the token are displayed in a table. Using Refresh Tokens The Fabrikam website is configured to store and use the refresh token provided by ACS when the access token expires. Use the following steps to configure ACS to issue short-lived tokens and see new tokens issued. To use refresh tokens In the ACS Management Portal, click Relying Party Applications. Click Customer Information Service. This will take you to the Edit Relying Party Application page. Change the value of Token lifetime (secs) to 5 so that the tokens will only be valid for a short period of time. Click Save. Open a new browser window or clear your browser’s cache. On the Fabrikam OAuth Client website, click Populate all data or Get customer data by ID multiple times and notice the contents of the access token change. The information in the token will be the same except for the ExpiresOn time and the signature. Troubleshooting - If you receive an HTTP Error 500.21 or 500.19 Internal Server Error when running the sample on a computer with more than one version of the .NET Framework installed or where .NET was installed before IIS, follow the instructions at ASP.NET IIS Registration Tool () to register ASP.NET with IIS. - If after clicking the Populate all data button on the Fabrikam Client Website, the page remains unchanged, verify that you have run the ConfigureAcsConsoleApplication project. Alternatively, attach a program to monitor the Http traffic to see any error messages contained in the URL as the browser is redirected.
http://msdn.microsoft.com/en-us/library/windowsazure/gg185937
CC-MAIN-2013-20
refinedweb
1,583
55.84
Rather than get all items, our API users may want to find a specific entity in the database. For this kind of request, we can use CrudRepository methods like .findById(Integer id). Continuing the example of the Person model, we would add this endpoint to our PersonController: import java.util.Optional; // Other imports and class scaffolding omitted @GetMapping("/people/{id}") public Optional<Person> getPersonById(@PathVariable("id") Integer id) { return this.personRepository.findById(id); } The endpoint, "/people/{id}", uses a path parameter so that the user of the API can pass in the id in the URL in their request. The responsibility of the controller class is to extract this id from the request and pass it into the .findById method in the repository. Note that this method returns an Optional<Person>. This is because the user may pass an ID that does not exist in the database. The Optional type from Java will allow the response to gracefully return null in the response body if the id supplied could not be found in the database. A request to this endpoint might look like: # Request for an ID that exists curl localhost:4001/people/3 # { # "id": 3, # "eyeColor": "brown", # "name": "Aneeqa Kumar", # "age": 23 # } # Request for an ID that does not exist curl localhost:4001/people/50 # null Instructions Make another GET endpoint that will allow you to fetch a single plant from the database. The endpoint should be available at "/plants/{id}" (using id as a path parameter). Make the method for this endpoint have the name getPlantById, and ensure the name of the path parameter is id. This method should return an Optional<Plant>, similar to how the getPersonById method returns an Optional<Person>. Like you did before for the getAllPlants method, use curl to test your getPlantById method. Try it out with any of the Plant IDs that you saw in the response for the previous endpoint you implemented.
https://www.codecademy.com/courses/learn-spring/lessons/add-a-database-with-jpa/exercises/make-queries-to-your-database-findbyid
CC-MAIN-2022-27
refinedweb
321
60.95
Get sidetracked? ________________________________ From: Francis Galiegue <fge@one2team.com> To: Ant Users List <user@ant.apache.org> Sent: Monday, May 4, 2009 2:26:40 PM Subject: Re: Reset BuildNumber Le Monday 04 May 2009 22:02:04 Eric Fetzer, vous avez écrit : > Thank you very much Francis! I really appreciate the hand holding as I > transition over to Ant... > You're more than welcome, since you have taught me a way to use ant-contrib tasks without having namespace issues like I did in the past (<taskdef> when ant-contrib is in the classpath already) ;) The solution is in progress, and will be yours in an extensible enough half-hour... I need to test it, but it makes good progress ;) --
http://mail-archives.apache.org/mod_mbox/ant-user/200905.mbox/%3C467892.97223.qm@web65702.mail.ac4.yahoo.com%3E
CC-MAIN-2014-42
refinedweb
121
70.13
#include <stdio.h>void initialize_arr(float *, int n);float summation(float *, int n);void main(){ int N=500; float arr[N], tot; initialize_arr(arr, N); tot = summation(arr, N); printf("tot = %f\n", tot);} arr_cal.f90 subroutine initialize_arr(arr, n) bind(c) use, intrinsic :: iso_c_binding real(kind=c_float) :: arr(n) integer(kind=c_int), value :: n call random_number(arr)end subroutinefunction summation(arr, n) result(tot) bind(c) use, intrinsic :: iso_c_binding integer(kind=c_int), value :: n real(kind=c_float) :: arr(n), tot tot = sum(arr)end function On AIX, the following commands are used to compile and link the program. $ mpxlf90_r -c arr_cal.f90$ mpcc_r –c main.c$ mpxlf90_r arr_cal.o main.o –o test1 The executable can be executed on a cluster of machines by using poe command. Before using the poe command, a host file needs to be created to specify on which hosts the program is executed. In addition, the same directory (with the same absolute path as the current directory on the local host) has to be created on all the remote hosts.host.list The command to execute the program on the listed hosts is shown as follow: $ mcp ./test1 –procs 4 –hfile host.list$ poe ./test1 -procs 4 -hfile host.listtot = 248.259613tot = 248.259613tot = 248.259613tot = 248.259613 The option –procs is to specify how many tasks are created. In this case, four tasks are created to execute the same program (test1) on different hosts as specified in the host file (host.list). What the mcp command does is to copy the executable to the remote hosts. The option –labelio specifies that the output from the parallel tasks is labeled by task id. $ poe ./test1 -procs 4 -hfile host.list -labelio yes 2:tot = 248.259613 3:tot = 248.259613 1:tot = 248.259613 0:tot = 248.259613 This example is a simple program to illustrate how the XL compilers work with PE. If you need to develop a program that exploits a distributed environment, PE is an essential tool. PE also provides a parallel debugger (pdb) for debugging parallel programs.In this blog, we briefly describe the PE product and how it is used with the XL compilers for exploiting the parallel environment. Of course, the program can be much more complicated and useful than the simple one discussed here. Some applications decompose the problem to smaller size and distribute it to different hosts to work on. After the work finishes, the application collects the data from different hosts for the final result. In addition, the poe command is demonstrated here to use for executing the program on any remote hosts.
https://www.ibm.com/developerworks/community/blogs/5894415f-be62-4bc0-81c5-3956e82276f3/entry/xl_compilers_and_parallel_environment?lang=en
CC-MAIN-2019-51
refinedweb
438
57.57
We have seen in previous posts what is machine learning and even how to create our own framework. Combining machine learning and finance always leads to interesting results. Nevertheless, in supervised learning, it is crucial to find a set of appropriate labels to train your model. In today’s post, we are going to see 3 ways to transform our data into a classification problem and 1 to transform it into a regression one. What is ‘labeling’? Labeling is the process of designing a supervisory signal for a set of data so that a model can infer properties from it. In other words, a label is an outcome we want our model to learn. We say that labeled data are annotated data. Like features, the way we label our data contains information about the problem itself. That is why is so important to do it right. Binary Labeling Let’s start with the simplest one. The easiest way to label returns is to assign a label depending on the returns sign: we label positive returns as class 1 and negative returns as class 0. We can call this method binary labeling. def binary_labelling(data, name='Close'): """Binary labelling. Label the data according to its sign. If it is positive, if will be labeled as 1, if it is negative, it will be labeled as 0. Returns equal to zero, if any, will be left as nan. Parameters ---------- data : pandas.DataFrame or pandas.Series The data from which the labels are to be calculated. The data should be returns and not prices. name : str, optional, default: 'Close' Column to extract the labels from. Returns ------- labs : pandas.DataFrame A pandas dataframe containing the returns and the labels for each return. """ # labs to store labels labs = pd.DataFrame(index=data.index, columns=[name, 'Label']) # get indices for each label idx_pos = data[data[name] > 0].index idx_neg = data[data[name] < 0].index # assign labels depending on indices labs[name] = data labs.loc[idx_pos, 'Label'] = 1 labs.loc[idx_neg, 'Label'] = 0 return labs Result of applying this method to the XAUUSD relative returns time series. The main drawback of this procedure is that it does not capture the differences in magnitude from two returns of the same sign; e.g. 0.01 has the same label as 1000. Therefore, it is not a very appropriate algorithm in most cases (but still useful to build intuition). Fixed-time horizon The first thing we can do to take into account these differences is to add a threshold from which the labels are computed. In chapter 3 of [1], by Marcos López de Prado, a method called Fixed-time horizon is presented as one of the main procedures to label data when it comes to processing financial time series for machine learning. The method is simple and can be defined by the following expression: $$ y_{i} = \begin{cases} -1, & \text{if $r_{t0,t1} < – \tau $} \\ 0, & \text{if $| r_{t0,t1}| \leq \tau $} \\ 1, & \text{if $r_{t0,t1} > \tau $} \end{cases} $$ def fixed_time_horizon(data, threshold, name='Close'): """Fixed-time horizon labelling. Compute the financial labels using the fixed-time horizon procedure. See references to understand how this method works. Parameters ---------- data : pandas.DataFrame or pandas.Series The data from which the labels are to be calculated. The data should be returns and not prices. name : str, optional, default: 'Close' Column to extract the labels from. threshold : int The predefined constant threshold to compute the labels. Returns ------- labs : pandas.DataFrame A pandas dataframe containing the returns and the labels for each return. References ---------- .. [1] Marcos López de Prado (2018). Advances in Financial Machine Learning Wiley & Sons, Inc. .. [2] Marcos López de Prado - Machine Learning for Asset Managers. """ # to store labels labs = pd.DataFrame(index=data.index, columns=[name, 'Label']) # get indices for each label idx_lower = data[data[name] < -threshold].index idx_middle = data[abs(data[name]) <= threshold].index idx_upper = data[data[name] > threshold].index # assign labels depending on indices labs[name] = data labs.loc[idx_lower, 'Label'] = -1 labs.loc[idx_middle, 'Label'] = 0 labs.loc[idx_upper, 'Label'] = 1 return labs Results of applying the fixed-time horizon method to the XAUUSD relative returns. Fixed-time horizon applied to XAUUSD relative returns. This method improves the binary labeling procedure, but it works assuming the market remains static (no regime changes, no volatility clustering [3], etc) due to the fixed threshold value. Can we do better while keeping a simple procedure? Yes, we can. Quantized labeling Ideally, we would want our method to automatically adapt reasonably well to changes in the market. Why don’t we use the varying properties of the returns distribution in our favour? That is exactly how quantized labeling [2] works. Quantized labeling consists in bucketizing the returns into categories derived from the quantile values. Computing the categories using a sliding/expanding window gives us the dynamic behaviour we seek. def quantized_labelling( data, n_labels, name='Close', window=None, fillnan=None, mode=None ): """Quantized labelling. Label the data according to a quantile calculation. The quantiles can be computed in rolling or expanding modes, as well as for the whole dataset at once. Parameters ---------- data : pandas.DataFrame or pandas.Series The data from which the labels are to be calculated. The data should be returns and not prices. n_lables : int The number of labels you want to compute. name : str, optional, default: 'Close' Column to extract the labels from. window : int, optional, default: None The period size to compute the rolling/expanding quantiles. fillnan : object, optional, default: None If not None, the remaining rows, after bucketing, whose values are NaN will be filled with the passed value. mode : str, {'rolling', 'expanding', None} If None, the data will be bucketed using the whole dataset. If 'rolling' or 'expanding', the data will be bucketed using the selected mode, with a window equals 'window' parameter. Returns ------- labs : pandas.DataFrame A pandas dataframe containing the returns and the labels for each return. References ---------- .. [1] Udacity - AI for trading """ def get_qcuts(series, quantiles): """Helper function """ q = pd.qcut(series, q=quantiles, labels=False, duplicates='drop') return q[-1] name = 'Close' q_val = 1 / n_labels quantiles = np.arange(0, 1+q_val, q_val) labs = pd.DataFrame(index=data.index, columns=[name]) labs[name] = data if mode is None: qc = pd.qcut(data[name], q=quantiles, labels=False) # concat to avoid errors with indexes labs = pd.concat([data, qc], axis=1) labs.columns= [name, 'Label'] else: if window is None: raise ValueError(f"'window' with value {window} is not valid.") else: pd_obj = getattr(data, mode)(window) labs['Label'] = pd_obj.apply( lambda x: get_qcuts(x, quantiles), raw=True ) # fill nans if fillnan is not None: labs.fillna(fillnan, inplace=True) return labs Note in the code above that the procedure can be applied in rolling, expanding, or for the whole dataset at once. Here is the result of applying quantized labeling to XAUUSD relative returns (we set n_labels to 7). Labeling for regression The last algorithm we are going to see allows us to transform our data into a regression problem. Hence, the labels will be continuous. The idea is simple: we apply a rolling window on our returns and select n past returns and 1 future return as a label. def unfold_ts_for_regression( data, look_back=20, look_ahead=1, ): """Unfolds ts for regression. This functions receives as input a time series and returns two sets, X and y. Parameters ---------- data : pandas.DataFrame, pandas.Series or numpy.array The time series to process. look_back : int, optional, default: 20 The number of days to look back to predict the next day. look_ahead : int, optional, default: 0 If 'look_ahead' is 1, the label will be the next data of the batch. If it is greater, the labels will be 'look_ahead' data of the batch. Returns ------- X : numpy.array An array containing the features. y : numpy.array An array containing the labels. """ if isinstance(data, pd.DataFrame) or isinstance(data, pd.Series): data = data.values elif isinstance(data, list): data = np.array(data) elif isinstance(data, np.ndarray): pass else: raise TypeError(f"Non-supported data type: {type(data)}") X = [] y = [] if look_ahead == 1: _range = range(0, len(data) - look_back) else: _range = range(0, len(data) - look_back - look_ahead) for idx in _range: batch_end = idx + look_back ahead_end = batch_end + look_ahead - 1 local_X = data[idx:batch_end] local_y = data[ahead_end] X.append(local_X) y.append(local_y) return np.array(X), np.array(y) It seems complicated but it is not. Let’s see an example with a list of dummy values to understand the function. x = [a for a in range(10)] X, y = unfold_ts_for_regression(data=x, look_back=2, look_ahead=1) The above lines output the following arrays for X and y respectively: # x = array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7], [7, 8]]) # y = array([2, 3, 4, 5, 6, 7, 8, 9]) See? It is just a sliding window that looks n values in the past (look_back) and selects a value from the future to forecast (look_ahead). Each iteration creates a new row in the features and labels matrix. Let’s plot the results in an animated gif to see the sequence: Be careful using this function, because you may incur a problem called overlapping outcomes (see chapter 4 of [1] for more information). Conclusions In this post, we’ve briefly seen 4 simple ways to label your financial data. There are more complex procedures out there, like triple-barrier [1] that I encourage you to study and test. Bibliography [1] Marcos López de Prado – Advances in Financial Machine Learning. [2] Udacity – AI for trading. [3] Rama Cont – Volatility Clustering in Financial Markets: Empirical Facts and Agent–Based Models.
https://quantdare.com/4-simple-ways-to-label-financial-data-for-machine-learning/
CC-MAIN-2022-40
refinedweb
1,609
59.6
NAME memfd_create - create an anonymous file SYNOPSIS #define _GNU_SOURCE /* See feature_test_macros(7) */ #include <sys/mman.h> int memfd_create(const char *name, unsigned int flags); DESCRIPTION mem On success, memfd_create() returns a new file descriptor. On error, -1 is returned and errno is set to indicate the error. ERRORS VERSIONS The memfd_create() system call first appeared in Linux 3.17; glibc support was added in version 2.27. CONFORMING TO The memfd_create() system call is Linux-specific. NOTES The). EXAMPLES Below #define _GNU_SOURCE #include <stdint.h> : %jd; fd: %d; /proc/%jd/fd/%d\n", (intmax_t) getpid(), fd, (intmax_t)); } SEE ALSO fcntl(2), ftruncate(2), mmap(2), shmget(2), shm_open(3) COLOPHON This page is part of release 5.09 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at
https://man.cx/memfd_create(2)
CC-MAIN-2022-21
refinedweb
144
67.25
In this example, we demonstrate on using a for loop to find the total sum of an array. We have talked about property. Here, we used an array property Length to find the number of element in the array. This is shown in line 13 where we access the length of the array using array.Length. This number is needed for the for loop to calculate the total. using System; namespace MyArray { class Program { static void Main() { double[] array = { 2.5, 3.2, 5.21, 6.0 }; double sum = 0; int length = array.Length; //finding the length using array propertye Length for (int i = 0; i < length; i++ ) { sum = sum + array[i]; } double average = sum / length; Console.WriteLine("Sum: {0}", sum); Console.WriteLine("Average: {0}", average); Console.ReadLine(); } } } The program begins by initializing a variable sum to 0. Then it loops through the values stored in array, adding each one to sum. By the end of the loop, sum has accumulated the sum of all values in the array. The resulting sum is divided by the number of elements to calculate the average.
http://codecrawl.com/2014/08/20/csharp-finding-sum-average-array/
CC-MAIN-2016-44
refinedweb
183
67.55
Publishers of technology books, eBooks, and videos for creative people Home > Blogs > Easier Coding in Flash Builder Flash Builder, like any good IDE, can facilitate programming in several different ways. By taking advantage of what Flash Builder can do, you can write ActionScript code more quickly and, most importantly, with better accuracy. The first thing you should remember when using Flash Builder is to let it do as much of the work as possible. For example, Flash Builder can automatically build the shell of event handlers, service calls, and ActionScript classes for you (among other things). Taking the event handler example, if you're in Source mode, when you add an event property to an MXML component, you'll be offered the opportunity to generate the event handler (Figure 1). In Design Mode, there's a button in the Properties panel you can click to do the same. The generated event handler will use the naming scheme componentName/Type_eventHandler, such as button1_clickHandler. By default, the handler will take an event argument, return no value, and be protected: protected function button1_clickHandler(event:MouseEvent):void { // TODO Auto-generated method stub } Another benefit of having Flash Builder generate this for you is that the required Event classes will be imported automatically, so you don't have to worry about forgetting that. On a similar note, through code completion and code hinting, Flash Builder will automatically suggest or enter components, properties, variables, and so forth for you. You'll be prompted as you code -- just use the arrow keys and press Enter/Return to select the appropriate choice from the list (Figure 2). If you keep typing, you'll narrow your selections. If the suggestions don't immediately appear, press Control+Space to activate them. Repeatedly pressing Control+Space will cycle among suggestion types: properties, events, effects, and styles. Code completion and code hinting works within both MXML and ActionScript, too. If you use this feature, you'll have less typing to do but also be assured that you're using the proper spelling, capitalization, namespaces, etc. Complex applications can quickly become overwhelmed with code. At best this will just slow down your development time (searching around for stuff) and at worst it can lead to problems. To keep things tidy, you should first consider using external ActionScript files. Within Source mode, though, you can easily find references to functions, variables, and components by pressing Alt+O (Windows) or Command+O (Mac OS X). This will bring up the Outline view in a separate pane. To make the code easier to read, you'll see that Flash Builder will automatically handle indentations for you as you type, even for code you paste. If the indentations get out of whack, select Source > Correct Indentation to have Flash Builder clean up. Another way to make the code easier to peruse is to make use of folding: the ability to hide blocks of code. Beside line numbers corresponding to the beginning of functions, control structures, and the like, you'll see a minus sign in a circle. Click that once to collapse the block. The icon will turn into a plus sign which you can later click to expand the block of code again. In the interim, you'll still be able to see what the code block is, without seeing its entire contents (and an ellipses will indicate that some code is being hidden). Finally, if you decide at some point in time later that you'd like to use a different name for a function or class, don't try to change it yourself. You'll spend too much time chasing down references to the item and you're likely to miss some. Instead, select the item's name, then choose Source > Refactor > Rename. At the prompt, you can provide a new name and choose to update all the references (Figure 3). If you're renaming a class, the prompt will differ slightly. Learn Adobe Animate CC for Multiplatform Animations (2018 release): Adobe Certified Associate Exam Preparation
http://www.peachpit.com/blogs/blog.aspx?uk=Easier-Coding-in-Flash-Builder
CC-MAIN-2018-13
refinedweb
672
60.14
A clear and powerful Take a look at the online demo at: or you can visit these websites who use Zinnia. - Fantomas’ side - Ubuntu’s developers blog - Tryolabs - AR.Drone Best of User Videos - Professional Web Studio - Infantium - Rudolf Steiner School of Kreuzlingen - Vidzor Studio LLC - Bookshadow - Future Proof Games - Detvora Club - Stumbling Mountain 0.18.1 - Fix messed relationships in Python 3…v0.18.1 0.18 - Compatibility with Django 1.10…v0.18 0.17 - Compatibility with Django 1.9 - Fix RSS enclosure - Fix paginator issue - Implement Entry.lead_html method - Usage of regex module to speed up…v0.17 0.16 - Improve testing - Improve documentation - Reduce queries on entry_detail_view - Implement custom templates within a loop - Add a publication_date field to remove ambiguosity - Remove WXR template - Remove usage of Context - Remove BeautifulSoup warnings…v0.16 0.15.2 -ia namespace - instead of django.contrib.comments…v0.14.2 0.14.1 - - Full Python 3.0 support - Django 1.5 is no longer supported - Better support of custom User model - Improvements on the archives by week - Fix timezone issues in templatetags and archives - Database query optimizations in the archives views…v0.14. Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django-blog-zinnia/0.18.1/
CC-MAIN-2018-17
refinedweb
217
50.53
Investors considering a purchase of Mohawk Industries, Inc. (Symbol: MHK) shares, but tentative about paying the going market price of $193.74/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular, is the February 2016 put at the $175 strike, which has a bid at the time of this writing of $6.20. Collecting that bid as the premium represents a 3.5% return against the $175 commitment, or a 6.3% annualized rate of return (at Stock Options Channel we call this the YieldBoost ). Selling a put does not give an investor access to MHK's upside potential the way owning shares would, because the put seller only ends up owning shares in the scenario where the contract is exercised. And the person on the other side of the contract would only benefit from exercising at the $175 strike if doing so produced a better outcome than selling at the going market price. ( Do options carry counterparty risk? This and six other common options myths debunked ). So unless Mohawk Industries, Inc. sees its shares fall 9.5% and the contract is exercised (resulting in a cost basis of $168.80 per share before broker commissions, subtracting the $6.20 from $175), the only upside to the put seller is from collecting that premium for the 6.3% annualized rate of return. Below is a chart showing the trailing twelve month trading history for Mohawk Industries, Inc., and highlighting in green where the $175 strike is located relative to that history: The chart above, and the stock's historical volatility, can be a helpful guide in combination with fundamental analysis to judge whether selling the February 2016 put at the $175 strike for the 6.3% annualized rate of return represents good reward for the risks. We calculate the trailing twelve month volatility for Mohawk Industries, Inc. (considering the last 252 trading day closing values as well as today's price of $193.74) to be 22%. For other put options contract ideas at the various different available expirations, visit the MHK Stock Options page of StockOptionsChannel.com. In mid-afternoon trading on Tuesday, the put volume among S&P 500 components was 822,953 contracts, with call volume at 926,867, for a put:call ratio of 0.89.
https://www.nasdaq.com/articles/commit-buy-mohawk-industries-175-earn-63-annualized-using-options-2015-07-28
CC-MAIN-2019-39
refinedweb
391
64.91
I spent the whole evening trying all sorts of circuits and understanding the meaning of "open drain" and "sink capability".I ended up with conclusion that if I feed +5V, an open drain transistor like the DS2406 PIO pins with sink capability of 0.4V means it can go from +5.0V to +4.6V.It can't go from +5.0 to +0.4V, though I'd like if it could work like this. Do you agree? #include <OneWire.h>// DS2406 PIO switch management exampleOneWire ds(10); // on pin 10void setup(void) { // initialize inputs/outputs // start serial port Serial.begin(9600);}void loop(void) { int HighByte, LowByte, TReading, SignBit, Tc_100, Whole, Fract; byte i; byte present = 0; byte data[16]; byte addr[8]; byte status[0]; status[0]=7; if ( !ds.search(addr)) { Serial.print("No more addresses.\n"); ds.reset_search(); return; } Serial.print("R="); for( i = 0; i < 8; i++) { Serial.print(addr[i], HEX); Serial.print(" "); } if ( OneWire::crc8( addr, 7) != addr[7]) { Serial.print("CRC is not valid!\n"); return; } if ( addr[0] != 0x12) { Serial.print("Device is not a DS2406 family device.\n"); return; } ds.reset(); //reset bus ds.select(addr); //select device previously discovered ds.write(0x55); //write status command ds.write(0x07); //select location 00:07 (2nd byte) ds.write(0); //select location 00:07 (1st byte) ds.write(0x1F); //write status data byte (turn PIO-A ON) Serial.print ("VALUE="); //read CRC16 of command, address and data and print it; we don't care for ( i = 0; i < 6; i++) { data[i] = ds.read(); Serial.print(data[i], HEX); Serial.print(" "); } ds.write(0xFF,1); //dummy byte FFh to transfer data from scratchpad to status memory, leave the bus HIGH delay(2000); //leave the things as they are for 2 seconds ds.reset(); ds.select(addr); ds.write(0x55); ds.write(0x07); ds.write(0); ds.write(0x3F); //write status data byte (turn PIO-A OFF) Serial.print (" VALUE="); for ( i = 0; i < 6; i++) { data[i] = ds.read(); Serial.print(data[i], HEX); Serial.print(" "); } ds.write(0xFF,1); delay(2000);} I tried different values for PIO-A pull-up resistors but when I measure voltage between +5V and PIO-A it's always +5V or +4.6V, the difference is exactly the sink voltage which is strange. Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php/topic,12355.0.html
CC-MAIN-2015-22
refinedweb
422
70.8
On Thu, Jan 15, 2009 at 10:19:58AM +0000, Daniel P. Berrange wrote: > > +#ifdef __sun > > + { > > + ucred_t *ucred = NULL; > > + const priv_set_t *privs; > > + > > + if (getpeerucred (fd, &ucred) == -1 || > > + (privs = ucred_getprivset (ucred, PRIV_EFFECTIVE)) == NULL) { > > + if (ucred != NULL) > > + ucred_free (ucred); > > + close (fd); > > + return -1; > > + } > > + > > + if (!priv_ismember (privs, PRIV_VIRT_MANAGE)) { > > + ucred_free (ucred); > > + close (fd); > > + return -1; > > + } > > + > > + ucred_free (ucred); > > Can move the ucred_free up before priv_ismember() call and thus > avoid the need for the call in the cleanup path. Nope, privs points into the ucred structure. > Is the chmod of the socket really required for solaris ? We already I'll double check these. > Also, if this libvirtd is running as UID 60, so the chown really > needed ? We also setgid before creating the socket so that it > gets desired group ownership at time of creation, rather than > having to change it post-create. At this point in the code (I think) we're still root. > This would appear to make it try to change the socket ownership > and permissions, before we've actually created the socket, which > is much later in the main() method where we call NetworkInit Hmm, let me re-check. > > +#ifdef __sun > > + /* > > + * On Solaris, all clients are forced to go via virtd. As a result, > > + * virtd must indicate it really does want to connect to the > > + * hypervisor. > > + */ > > + name = "xen:///"; > > +#endif > > This should not be neccessary if the client end + drivers are > correctly written. Can you explain a bit more? Why don't we need to rewrite the URI as xen? > If you want Xen to always go via the demon, the only change that should > be required, is to make xenUnifiedOpen() return VIR_DRV_OPEN_DECLINED. Hmm, yes you might be right. Let me experiment. > > if (err == 0) { > > - error (in_open ? NULL : conn, > > - VIR_ERR_RPC, _("socket closed unexpectedly")); > > + DEBUG("conn %p: socket closed unexpectedly", conn); > > return -1; > > } > > } > > These two I/O methods here have been completely re-written in my > thread patches. Why is removing the error messages required ? If we try to connect w/o privilege, then the socket is closed straight after accept() - so it's not longer an RPC error for this to happen. > > + /* > > + * If the connection over a socket failed abruptly, it's probably > > + * due to not having the right privileges. > > + */ > > + if (sigpipe) > > + vshError(ctl, TRUE, _("failed to connect (insufficient privileges?)")); > > + > > It will also be seen if the daemon drops the connection due to an > OOM condition, or the max-clients limit being exceeded, so perhaps > a little more detailed message. Suggestions? thanks john
https://www.redhat.com/archives/libvir-list/2009-January/msg00276.html
CC-MAIN-2014-15
refinedweb
412
72.97
Wendy, you could most certainly use a LRUMap with a fixed size. Give each item a unique key and let the Map take care of uniqueness. LRUMap will take care of discarding the least recently used entry once it reached the maximum defined size, and the Iterator returns most recently used to least recently used. This would be the easiest way to do this, by far. Or you could do something that takes more work, but I think it more fun: Define a Predicate: import java.util.Collection; mport org.apache.commons.collections.Predicate; public class UniqueInCollection implements Predicate { private Collection collection; public UniqueInCollection(Collection collection) { this.collection = collection; } public boolean evaluate(Object o) { return !collection.contains( o ); } } Then use a CircularFifoBuffer married to the Predicate. Only downside is that you have to catch IllegalArgumentException throw by PredicatedBuffer: import java.util.Iterator; import org.apache.commons.collections.Buffer; import org.apache.commons.collections.Predicate; import org.apache.commons.collections.buffer.CircularFifoBuffer; import org.apache.commons.collections.buffer.PredicatedBuffer; public class RecentlyVisited { public static void main(String[] args) { Buffer buffer = new CircularFifoBuffer(5); Predicate unique = new UniqueInCollection(buffer); Buffer recentVisited = PredicatedBuffer.decorate(buffer, unique); add( recentVisited, "Page 1" ); add( recentVisited, "Page 2" ); add( recentVisited, "Page 3" ); add( recentVisited, "Page 4" ); add( recentVisited, "Page 1" ); add( recentVisited, "Page 2" ); add( recentVisited, "Page 1" ); add( recentVisited, "Page 21" ); add( recentVisited, "Page 22" ); add( recentVisited, "Page 1" ); Iterator i = buffer.iterator(); while( i.hasNext() ) { String value = (String) i.next(); System.out.println( value ); } } public static void add(Buffer recentVisited, String page) { try { recentVisited.add( page ); } catch( IllegalArgumentException iae ) { // do nothing, buffer will complain if predicate fails. } } } Mattias Jiderhamn wrote: > Possibly it could also be a MRU (Most Recently Used) cache. > > At 2005-07-03 23:39, you wrote: > >> I'd say you were looking for an ordinary priority queue, where the >> priority=the timestamp. Try the Heap class. >> >> Sincerely, >> Silas Snider >> >> On 7/3/05, Wendy Smoak <java@wendysmoak.com> wrote: >> > I'm looking through the Collections API, but not finding exactly >> what I >> > want... hoping someone who's more familiar with it can point me in >> the right >> > direction. >> > >> > What I'm trying to do is more or less what you see on catalog sites >> where >> > they'll list the most recent items you've looked at, newest on >> top. So it's >> > ordered (List), but has no duplicates (Set), and I need to have a >> max size. >> > >> > ListOrderedSet is almost there, except that it retains the 'old' >> position if >> > you add the same item again. (And has no max length.) >> > >> > So... before I either write it myself or extend ListOrderedSet to >> make it do >> > what I want, does anyone have another suggestion? And what would >> _you_ call >> > it? >> > >> > Thanks, >> > Wendy Smoak > > > > --------------------------------------------------------------------- >
http://mail-archives.apache.org/mod_mbox/commons-user/200507.mbox/%3C42CA4CC9.4040603@discursive.com%3E
CC-MAIN-2017-09
refinedweb
459
50.63
Can I 'ignore' query string variables before pulling matching objects from the cache, but not actually remove them from the URL to the end-user? For example, all the marketing utm_source, utm_campaign, utm_* values don't change the content of the page, they just vary a lot from campaign to campaign and are used by all of our client-side tracking. So this also means that the URL can't change on the client side, but it should somehow be 'normalized' in the cache. Essentially I want all of these... ... to all access HIT the cache for However, this URL would cause a MISS (because the param is not a utm_* param) Would trigger the cache for Also, keeping in mind that the URL the user sees must remain the same, I can't redirect to something without params or any kind of solution like that. So I'll add a disclaimer that this regex probably isn't perfect, but it should work pretty well: sub vcl_recv { set req.url = regsuball(req.url, "\?(utm_[^=&]*=[^&=]*&?)+", "?"); set req.url = regsuball(req.url, "&(utm_[^=&]*=[^&=]*(&|$))+", "\2"); set req.url = regsub(req.url, "\?$", ""); return (pass); } This should remove any query parameters starting with utm_. I used three regexs to make it clearer and easier to read. The first regsuball removes any utm_ parameters at the beginning of the query string. It looks for one or more utm_ parameters immediately after the ?. The second regsuball removes any utm_ parameters that aren't at the beginning of the query string. The third regex will cleanup the URL by removing the ? if there are no query parameters left after we are done removing utm_ parameters. Both regexes need to be in ()+ as this will match one or more consecutive utm_ parameters (they wouldn't be matched otherwise). Example results: Source URL: /?utm_track=1&utm_test2=hey&test=utm_blah&utm_source=google&variation=5&utm_query=abc&utm_test7=yes Maps to: /?test=utm_blah&variation=5 Source URL: /?variation=5&utm_test1=abc&utm_test2=def&blah=1 Maps to: /?variation=5&blah=1
https://www.codesd.com/item/ignore-utm-values-with-varnish.html
CC-MAIN-2019-13
refinedweb
339
55.74
Communities Control-M not able to pass values to WebserviceArjun Thakur Jun 1, 2012 2:35 AM Hi All, I created below webservice usign Netbeans. ************************************************************* @WebService() public class AddNumbersImpl { @WebMethod(operationName = "getName") public String getName(@WebParam(name = "FName") String FName, @WebParam(name = "LName") String LName) { String FullName = FName + LName; return FullName; } } ************************************************************* and configured the same in BMC Control-M In above screen I passed values for variable "FName" as "qq" and for variable "LName" as "dd" Query: While executing the control-m job, Webmethod "getName" is not receiving any values from Control-M. But after execution of Webmethod, Control-M correctly fetch the "Return" value and displys it into the sysout. Regards, Arjun. 1. Re: Control-M not able to pass values to WebserviceManoj Belekar Jul 27, 2012 2:54 AM (in response to Arjun Thakur) Hi Arjun, Please, follow the below steps to resolve the issue: 1) Apply fix pack PACOB.6.3.02.110 2) Create an empty configuration file in the <Agent>/cm/WS/java folder and name it "ctm.bpi.cm.ws.properties". Insert one line in this file exactly as follows: USE_QUALIFIED_PREFIX Y 3)This approach requires Tools->Shutdown CM of the WS CM to update the CM runtime. Hope this helps. 2. Re: Control-M not able to pass values to WebserviceArjun Thakur Jul 30, 2012 2:04 AM (in response to Manoj Belekar) Hi Manoj, Thanks for your valuable help. This resolves the issue.
https://communities.bmc.com/thread/67367?start=0&tstart=0
CC-MAIN-2019-18
refinedweb
241
55.78
A heuristic that stores the initial solution of the NLP. More... #include <BonInitHeuristic.hpp> A heuristic that stores the initial solution of the NLP. This is computed before Cbc is started, and in this way we can tell Cbc about this. Definition at line 24 of file BonInitHeuristic.hpp. Copy constructor. Destructor. Default constructor. Clone. Assignment operator. Definition at line 42 of file BonInitHeuristic.hpp. objective function value from initial solve Definition at line 56 of file BonInitHeuristic.hpp. point from initial solve Definition at line 59 of file BonInitHeuristic.hpp. Size of array sol. Definition at line 62 of file BonInitHeuristic.hpp.
http://www.coin-or.org/Doxygen/Couenne/class_couenne_1_1_init_heuristic.html
crawl-003
refinedweb
103
60.51
Extending Phooey From HaskellWiki Revision as of 07:27, 18 August 2007 A note on how Phooey is being extended and utilised to develop a non-trivial application HGene. The key problem being addressed is the 'wiring problem' - How to wire up all the buttons, menus and display widgets so that it all works smoothly and is obvious? First a screen shot: To achieve this the following was done. A number of widgets were added: recordEditor :: Sourceable a => Source (Maybe a) -> UI (Source (Maybe a)) listDisplay :: Sourceable a => Source [a] -> UI (ListSources a) treeDisplay :: Sourceable a => Source (Tree a) -> UI (TreeSources a) All of these are dynamic and take a source which provides updates on the information that they display. Sourceable is a new class that provides a number of utility functions that are used by recordEditor primarily. ListSources and TreeSources are a way of grouping sources. data TreeSources a = TreeSources { treeSelect :: Source (Maybe a), treeCopy :: Source (Maybe a) } data ListSources a = ListSources { listSelect :: Source (Maybe a), listCopy :: Source (Maybe a) } The select source is for selections in the widget, and the copy source is hooked into a popup menu action on the widget. This is wired to copy the current item into the clipboard. An additional non-visible 'widget' was created to provide persistence. The input source is updated 'a' values. database :: Sourceable a => (Map Int a) -> Source (Maybe a) -> UI (Source (Map Int a)) The following is wired together as follows: hgene :: Map Int Person -> UI (Source ()) hgene pd = mdo db <- database pd pers (pers,clipboard) <- fromLeft $ do aTree <- title "Family Tree" $ treeDisplay (liftA asTree db) fromTop $ do pers <- title "Person" $ recordEditor (aTree!treeSelect) aList <- title "Children" $ listDisplay ( liftA2 (\d p -> maybe [] (childrenOf d) p) db (aTree!treeSelect) ) pasted <- liftIO ( (aTree!treeCopy) `orSource` (aList!listCopy)) clipboard <- title "Clipboard" $ showDisplay pasted return (pers,clipboard) return clipboard The input map is a map of Person records read from a file. The above could be neater except that mdo and the fromLeft/fromTop functions don't play together nicely. The '!' construct was borrowed from PropLang. 'orSource' is a function to OR two sources together. I hope it is clear from the above code what the wiring is. If it isn't let me know about this and other suggestions on the talk page Next steps are: - Add more editing actions - add delete and insert buttons to the recordEditor - Add a 'write protect' check box which if checked will prevent changes to the data but also grey out the menu buttons which trigger updates. - Add a way of allowing users of listDisplay and treeDisplay to specify additional buttons for the popup menu. - How might undo/redo be handled?
http://www.haskell.org/haskellwiki/index.php?title=Extending_Phooey&diff=prev&oldid=15126
CC-MAIN-2014-35
refinedweb
446
59.94
#include <MUserEventMessage.h> This class is used to register user-defined event types, register callbacks with the user-defined event types, and to post user-defined messages. The registerUserEvent and deregisterUserEvent methods allow user event types to be created and destroyed. User events are identified by a unique string identifier. The addCallback method registers a function that will be executed whenever the specified message occurs. An id is returned and is used to remove the callback. The postUserEvent notifies all registered callbacks of the occurence of the user-defined event. To remove a callback use MMessage::removeCallback. All callbacks that are registered by a plug-in must be removed by that plug-in when it is unloaded. Failure to do so will result in a fatal error. Adds a new event type with the given string identifier. The string identifier can then be used in all other MUserEventMessage methods to operate on the new event type. Checks if an event type exists with the given event name. Removes the event type with the given event name. If callbacks have been registered with this event type, they will become invalid after a successful call to this method. This method registers a callback for user-defined messages. The parameter clientData will be passed to callbacks registered for this event whenever the event is triggered. To override the data that is passed to the callback whenever the event is posted, you can supply a clientData pointer to postUserEvent(). Notifies all callbacks attached to the given event type of the occurence of the event. If clientData is specified, this data will be passed to all callbacks that receive the event. If clientData is NULL (the default), the clientData registered with addUserEventCallback will be passed to the callbacks.
http://download.autodesk.com/us/maya/2009help/api/class_m_user_event_message.html
CC-MAIN-2016-44
refinedweb
293
65.52
Details - Skillsc#, c++, embedded Joined devRant on 9/15/2017 - - Why the fuck do coding blogs insist on using themes with a 600px content column, then use code samples that are 3x wider than that? The whole reason I have a widescreen monitor is to not _have_ to scroll, jackass!1 - I wish the U.S. didn't "pasteurize" eggs, because without that, eggs are shelf-stable, which would mean I could keep them in a basket on my countertop and label it "npm"2 - Today, I learned the shortest command which will determine if a ping from your machine can reach the Internet: ping 1.1 This parses as 1.0.0.1, which thanks to Cloudflare, is now the IP address of an Internet-facing machine which responds to ICMP pings. Oh, you can also use this trick to parse 10.0.0.x from `10.x` or 127.0.0.1 from `127.1`. It's just like IPv6's :: notation, except less explicit.11 - Oh JavaScript... can you seriously not even increment the exponent of a float without barfing? *siiiiiiiiiiiiiiiigh*15 - -. - > be me, working on small addition to enormous feature branch > build system in flux due to reorganization started a month ago, not quite solid yet, but mostly works > f_branch gets master merged into it sometime last week > bossman makes "minor" change to build system and edits master to match > doesn't merge changes into f_branch > bossman goes on holiday for a week > no permission to merge master changes into f_branch > linter barfs > npm barfs > build server barfs > mfw I can't even deploy to our testing environment5 - - Opus is an amazing codec, but did Soundcloud really have to switch to it with a bitrate of 64kbps? even 80kbps would have been worlds better. Bandwidth isn't _that_ expensive, even post-neutrality.4 - the people in Ops all have space heaters, but we don't have the power Seriously though, building management needs to turn up the heat by like 3°C. And install new breakers. And fix the shitty wiring. - Just created a CLUSTERED INDEX -- knowingly and intentionally -- for the first time. I feel like a frickin' sorcerer. -. - - You know what's more fun than debugging a SQL stored prodecure? Debugging a SP which CATCHes all errors and instead returns an error code. Because exceptions are scary... -. - Fuck off OneDriveSetup.exe, nobody asked you to install anything. This "i7" is only dual-core, and I need both of them to run my code, kthx. - It looks like Microsoft are back to their old tricks, specifically the DirectX 9 naming scheme. Naming releases after Northern Hemisphere seasons and repeating words never gets old and/or confusing! Who's looking forward the "Winter (2018) Creator's Update"?6 - Using ReSharper is like becoming enlightened, or de-brainwashing oneself to see true reality. Of my entire dev team, I'm the only one who can see the fnords! Unused identifiers, badly sorted modifiers, unused property setters, redundant `this`/namespace, redundant casts... Surely if they could see them too, such code would not survive! - Lodash, Rimraf, Grunt, Gulp... I'm still not convinced that our frontend guy isn't just playing Pokémon Go all day3 - Goddamn, Windows' idea of symlinks is completely broken. It's like they faked it at the UI level, but if your build process wants to copy the file? Too bad, it's not real so you can't copy it.3 - Javascript is a lot easier when you remember that there's a convenient REPL that is never further than an F12 away2
https://devrant.com/users/tullo-x86
CC-MAIN-2018-47
refinedweb
599
65.62
import org.netbeans.modules.form.palette.PaletteItem;25 26 /**27 * Cookie allowing drag and drop of nodes to form module.28 *29 * @author Jan Stola, Tomas Pavek30 */31 public interface NewComponentDrop extends Node.Cookie {32 33 /**34 * Describes the primary component that should be added.35 */36 PaletteItem getPaletteItem();37 38 /**39 * Callback method that notifies about the added component. You should40 * set properties of the added component or add other beans to the model41 * in this method.42 *43 * @param model model of the form.44 * @param componentId ID of the newly added component.45 */46 void componentAdded(FormModel model, String componentId);47 48 }49 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/netbeans/modules/form/NewComponentDrop.java.htm
CC-MAIN-2016-44
refinedweb
121
51.04
[SOLVED] QSystemScreenSaver I'm trying to stop the screensaver when my app is in the foreground. () I followed the code snippet from the link above, but still, I'm getting compile errors 'QSystemScreenSaver' does not name a type. I use Qt SDK 1.1 beta on Windows 7. The compiler can't find QSystemScreenSaver class declaration. Make sure your code has include line like this before using QSystemScreenSaver class: @#include <QSystemScreenSaver>@ You also need to specify QtMobility namespace, that blog article does not mention it for some reason, so make sure there is a macro line like this after the include line: @QTM_USE_NAMESPACE@ Make also sure you have lines like this in your .pro file @CONFIG += mobility MOBILITY += systeminfo@ tnx, it works now.
https://forum.qt.io/topic/4103/solved-qsystemscreensaver
CC-MAIN-2018-47
refinedweb
123
62.68
Ok, I'm no noob to writing client/server applications or API's to interact with them. However, I have usually done this type of programming in C or PHP and rolled my own *EVERYTHING* to make it work. This is obviously very time consuming, and since I've been writing more and more API's lately I decided to give 'something else' a shot. I've been playing around with Python a lot lately, and I really like it's flexibility and ease of use, so I decided to see what I could find on writing my new API in Python. After some careful examination, it probably wouldn't be too terribly hard to roll your own API in Python, but I wanted to see what else was out there - after all, I've already "been there, done that". So after spending some time searching around the net, and researching various options available to me, I came up with the perfect solution... Well, in my mind at least - it looked like Django coupled with Piston was the quickly becoming the defacto standard for developing new API's in Python and it has a lot of great features right out of the box. Piston is described as a mini framework for developing RESTful API's, here are some features out of the box: * Ties into Django's internal mechanisms. * Supports OAuth out of the box (as well as Basic/Digest or custom auth.) * Doesn't require tying to models, allowing arbitrary resources. * Speaks JSON, YAML, Python Pickle & XML (and HATEOAS.) * Ships with a convenient reusable library in Python * Respects and encourages proper use of HTTP (status codes, ...) * Has built in (optional) form validation (via Django), throttling, etc. * Supports streaming, with a small memory footprint. * Stays out of your way. So the next step was to try a simple API myself, but being unfamiliar with Django and even less familiar with Piston I needed an example, or a tutorial, or something to get me started. Sure, there was SOME documentation, but even the example that was labeled "A fully functioning example" wasn't complete (at least from what I could tell, it was at least missing the data model). So that's where this article comes in... I am going to create a fully functioning (albeit extremely simple) API so you can at least get the basics... Maybe in some future articles I will expand on this more to add data models, authorization, etc. First you need to make sure you have Python, Django, and Piston installed properly. I am not going to cover that here as there seems to be a lot of debate on what the "proper" way to do this is. Seems like the most current recommendations are to use something called 'VirtualEnv' to setup your Python development library, but I'm not choosing sides. :) Now make sure you are in your code directory (where-ever you normally keep your development projects), and create your Django project: % django-admin startproject calc This will create a a directory that looks like the following: calc/ __init__.py manage.py settings.py urls.py This is your basic Django project directory structure. Now we need to create our application. Most discussions say that your API "application" should be named "api" and be kept inside of your Django project directory. I tend to agree with this, so go into the 'calc' directory we created above with the startproject command and create an application named 'api'. % cd calc % ./manage.py startapp api Your entire project directory will now look like this: calc/ __init__.py api/ __init__.py models.py tests.py views.py manage.py settings.py urls.py Now we need to make sure that our API is accessible, we do this using urlpatterns in the calc/urls.py file (similar to Ruby routes). Edit calc/urls.py and make it look like the following: from django.conf.urls.defaults import * urlpatterns = patterns('', (r'^api/', include('calc.api.urls')), ) What this is saying is anytime we receive a request for a URL that begins with api/ we need to include the python module calc.api.urls (which is found in calc/api/urls.py). So let's edit calc/api/urls.py now and make it look like the following: from django.conf.urls.defaults import * from piston.resource import Resource from api.handlers import CalcHandler class CsrfExemptResource( Resource ): def __init__( self, handler, authentication = None ): super( CsrfExemptResource, self ).__init__( handler, authentication ) self.csrf_exempt = getattr( self.handler, 'csrf_exempt', True ) calc_resource = CsrfExemptResource( CalcHandler ) urlpatterns = patterns( '', url( r'^calc/(?P<expression>.*)$', calc_resource ) ) This turns out to be a pretty important piece of code. Apparently in Django (v1.2+?) they implemented some extra security to prevent against Cross Site Request Forgery (CSRF), which is great for most situations - but personally I don't believe it applies to API's as MOST requests are going to be from other sites... I mean, that's the whole idea, right? ;) So essentially what this does is wrap the BaseHandler resource in our CsrfExemptResource and then disable the CSRF checking by setting csrf_exempt = True. Technically this is something that should probably be in the base Piston code, but as of the time of this writing it was not. If you don't use the chunk of code above, your will get a 403 error when you attempt to update something (Ie: PUT, POST, DELETE) using your new API. The urlpatterns here are what get used after the other url pattern is already stripped off, so even though the pattern matches against the beginning of the line (^calc/), since we've already parsed ^api/ to get here, the full path is actually going to be api/calc/<expression>. A call to that URL will be forwarded along to our 'calc_handler' which is a Piston Resource 'CalcHandler' wrapped in our CsrfExemptResource class. Clear as mud, right? ;) Now we need to write our CalcHandler class, we will do this in handlers.py file (which doesn't exist yet) in our api directory. Let's edit calc/api/handlers.py and make it look like the following: from piston.handler import BaseHandler class CalcHandler( BaseHandler ): def read( self, request, expression ): return eval( expression ) I'm not even going to go into the security implications of this, because that's not what this article is about, but suffice it to say YOU SHOULD NEVER RUN THIS ON A LIVE SERVER! The 'eval( expression )' is just like handing hackers the key to your new Ferrari, it's a wide-open door into your server. The "read" method is what will get called when we attempt to READ from this handler (Ie: a GET request). Now our directory structure should look like this: calc/ __init__.py api/ __init__.py handlers.py models.py tests.py urls.py views.py manage.py settings.py urls.py And the only files we needed to touch were: calc/urls.py calc/api/handlers.py calc/api/urls.py That's it! Now let's fire up Django's built in web server to test out our new API. From the calc/ directory: % ./manage.py runserver localhost:8000 & That will start a very simple webserver (designed for testing, not for production use) on your localhost at port 8000. Now let's use curl to make a call to our new API: % curl 3 What this did is connect to our localhost at port 8000 and request the URL api/calc/1+2 from our API. Django saw the api/... and based on our calc/urls.py knew that it had to include our urlpatterns from calc/api/urls.py. Once it did that, it matched the 2nd part of our URL 'calc' and then passed the rest of our URL '1+2' along to our CalcHandler which did an eval on it and returned the result (which was 3 in this case). So there you go, a VERY basic API setup in Python with Django and Piston. I know it's not very impressive, that'll come later - but even getting this far without any documentation proved to be extremely difficult, so I hope this helps the next person. The one last thing you should do is add your new API application to the list of INSTALED_APPS in the calc/settings.py file. It's not necessary in this example, but once you get into models it will make some things happen automagically. For this example, I would add 'calc.api' to the end of INSTALLED_APPS. Nice and Simple I like the simplistic idea of this example. But I was wondering if there was a way you could demonstrate another example with just plain old Django without adding another framework. That would be very awesome. A little help!!! Would it be possible to have the same example but considering the new Django version 1.5? because when I created a project the folders and file organization is different. There are just small differences but still I can not make the example run =( I will really appreciate your help!!! Jose Excellent! Thanks for writing a simple, easy to understand example for api creation, this was very helpful! Hooray! It is very refreshing to come across a tutorial which actually works. Thanks! Really very helpful tutorial Really very helpful tutorial Thank you for your article Thank you very much for your article which has made me know the basic mechanism of Piston. :-) Thanks man This is very helpful. Thanks man CSRF The CSRF thing was killing me because no amount of debug would tell me what was happening. Thanks! Very awesome! Thank you very much! This is such a useful tutorial :) Extremely helpful Extremely helpful I can't thank you enough Do you have any example of Do you have any example of Piston using authentication? Nice, simple walkthrough Thanks for posting your experience. The walkthrough was simple and easy to follow. It definitly saved me some time! thanks just what I was looking for to help me get going with Piston. well written, concise! Simple and Effective I have been searching for a while to a simple tutorial for implementing a Web Service in Django. They (Piston guys), should include this example in theirs documentation. Thank you very much.
http://www.robertshady.com/content/creating-very-basic-api-using-python-django-and-piston
CC-MAIN-2015-40
refinedweb
1,717
63.9
OpenGL Programming/Modern OpenGL Introduction Contents, GLUT and GLEW are ready to use. Check the installation pages to prepare everything for your system. To situate these libraries in the OpenGL stack, check APIs, Libraries and acronyms. Note: we chose GLUT because it is as minimal as a portable layer can get. We won't use advanced features, and almost all of our code will be plain OpenGL, so you'll have no troubles to move to another, more general library such as GLFW, SDL and SFML when you'll create your new game or application. We may write specific tutorials to cover how to switch to these libraries.] It is very easy to configure 'make' for our example. Write this in a 'Makefile' file: LDLIBS=-lglut -lGLEW -lGL all: triangle To compile your application, type in a terminal: make A little more fancy Makefile looks like this: CC=g++ LDLIBS=-lglut -lGLE. A Makefile for MacOS. Initialization[edit] Let's create a file triangle.c: /* Using the standard output for fprintf */ #include <stdio.h> #include <stdlib.h> /* Use glew.h instead of gl.h to get all the GL prototypes declared */ #include <GL/glew.h> /* Using the GLUT library for the base windowing setup */ #include <GL/freeglut.h> /* ADD GLOBAL VARIABLES HERE LATER */ int init_resources(void) { /* FILLED IN LATER */ return 1; } void onDisplay() { /* FILLED IN LATER */ } void free_resources() { /* FILLED IN LATER */ } int main(int argc, char* argv[]) { /* Glut-related initialising functions */ glutInit(&argc, argv); glutInitContextVersion(2,0); glutInitDisplayMode(GLUT_RGBA|GLUT_DOUBLE|GLUT_DEPTH); glutInitWindowSize(640, 480); glutCreateWindow("My First Triangle"); /* Extension wrangler initialising */ GLenum glew_status = glewInit(); if (glew_status != GLEW_OK) { fprintf(stderr, "Error: %s\n", glewGetErrorString(glew_status)); return EXIT_FAILURE; } /* When all init functions run without errors, the program can initialise the resources */ if (init_resources()) { /* We can display it if everything goes OK */ glutDisplayFunc(onDisplay); glutMainLoop(); } /* If the program exits in the usual way, free resources and exit with a success */ free_resources(); return EXIT_SUCCESS; } In init_resources, we'll create our GLSL program. In onDisplay, we'll draw the triangle. In free_resources, we'll destroy the GLSL program. Vertex array[edit] Our first triangle will be displayed in 2D - is: /* Function: init_resources Receives: void Returns: int This function creates all GLSL related stuff explained in this example. Returns 1 when all is ok, 0 with a displayed error */ int init_resources(void) { GLint compile_ok = GL_FALSE, link_ok = GL_FALSE; GLuint vs = glCreateShader(GL_VERTEX_SHADER); const char *vs_source = #ifdef GL_ES_VERSION_2_0 "#version 100\n" // OpenGL ES 2.0 #else "#version 120\n" // OpenGL 2.1 #endif "attribute vec2 coord2d; " "void main(void) { " " gl_Position = vec4(coord2d, 0.0, 1.0); " "}"; glShaderSource(vs, 1, &vs_source, NULL); glCompileShader(vs); glGetShaderiv(vs, GL_COMPILE_STATUS, &compile_ok); if (0 == compile_ok) { fprintf(stderr, "Error in vertex shader\n"); return 0; } 120 ) { fprintf(stderr, "Error in fragment shader\n"); return 0; }) { fprintf(stderr, "glLinkProgram:"); return 0; } Pass) { fprintf(stderr, "Could not bind attribute %s\n", attribute_name); return 0; } return 1; } Now we can pass our triangle vertices to the vertex shader. Let's write our onDisplay procedure. Each section is explained in the comments: void onDisplay() { /*, }; /* triangle_vertices // pointer to the C array ); /* Push each element in buffer_vertices to the vertex shader */ glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(attribute_coord2d); /* Display the result */ glutSwapBuffers(); } Cf. "OpenGL ES Shading Language 1.0.17 Specification". Khronos.org. 2009-05-12.. Retrieved 2011-09-10. - The OpenGL ES Shading Language (also known as GLSL ES or ESSL) is based on the OpenGL Shading Language (GLSL) version 1.20
http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Introduction
CC-MAIN-2014-42
refinedweb
572
53.81
Package: e2fsprogs Version: 1.42~WIP-2011-10-05-1 Severity: important Tags: patch User: debian-hurd@lists.debian.org Usertags: hurd Hello, The patch inlined below fixes the remaining FTBFS problems for GNU/Hurd. It probably fixes the build problems for GNU/kFreeBSD too. The QIF_* stuff is only used for quota_v2 and kfree seems to have version quota_v1 as their quota.h does not have these defined. For kfreebsd-* quota.h is at ufs/ not sys/ so the test below also applies to this architecture. >From the latest build log quotactl is found, and that function is defined in quota.h. Thanks! diff -ur e2fsprogs-1.42~WIP-2011-10-05/lib/quota/quotaio.h e2fsprogs-1.42~WIP-2011-10-05.modified/lib/quota/quotaio.h --- e2fsprogs-1.42~WIP-2011-10-05/lib/quota/quotaio.h 2011-10-05 01:18:15.000000000 +0200 +++ e2fsprogs-1.42~WIP-2011-10-05.modified/lib/quota/quotaio.h 2011-10-06 09:45:45.000000000 +0200 @@ -14,6 +14,7 @@ #include "ext2fs/ext2fs.h" #include "quota.h" #include "dqblk_v2.h" +#include "config.h" /* * Definitions for disk quotas imposed on the average user @@ -103,10 +104,14 @@ /* Flags for commit function (have effect only when quota in kernel is * turned on) */ +#ifdef HAVE_SYS_QUOTA_H #define COMMIT_USAGE QIF_USAGE #define COMMIT_LIMITS QIF_LIMITS #define COMMIT_TIMES QIF_TIMES #define COMMIT_ALL (COMMIT_USAGE | COMMIT_LIMITS | COMMIT_TIMES) +#else +#define COMMIT_ALL 0 +#endif /* Structure of quotafile operations */ struct quotafile_ops {
http://lists.debian.org/debian-hurd/2011/10/msg00020.html
CC-MAIN-2013-20
refinedweb
239
53.78
linc-thra Junior Member Last Activity: 3rd June 2017 06:47 PM Most Thanked Thanks Post Summary 1 Is there a reason why you are trying to do this and not use the Plex App? Is it just a coding experiment or is there some reason why you want to bypass the Plex app? There is a reason. Now that this is working, I have tied it to my Google home ... 1 My bad, that first code line should be import pychromecast.controllers.plex as px if you edit the existing plex.py file, I think. I had made my own at plexapi.py so as not to lose the original. 1 How did you go about sending the commands to the chromecast? I've spent most of the day trying to figure out pychromecast, but I'm not having much luck. I just cannot figure out how to expand the namespace to add a new plex module. I feel like if ...
https://forum.xda-developers.com/member.php?s=8e16cab77baa24f5520ba455c19855c2&u=7929505
CC-MAIN-2018-05
refinedweb
162
84.88
I have an class that is an enumerated type: public class MyType implements Serializable { public final static MyType TYPE1 = new MyType(); public final static MyType TYPE2 = new MyType(); ... } When I use this type in a business object that is returned from a session bean method, the type is broken. I understand why, because I'm using references which are equal only in the same process. So my question is how can I implement an enumerated type that will work with EJB (for remote calls)? Is there a pattern for this? Michael Discussions EJB programming & troubleshooting: Using enumerated types with EJBs Using enumerated types with EJBs (2 messages) - Posted by: Michael Mattox - Posted on: March 11 2002 08:37 EST Threaded Messages (2) - Using enumerated types with EJBs by Kapil Israni on March 11 2002 13:43 EST - Using enumerated types with EJBs by Tom Davies on March 11 2002 18:33 EST Using enumerated types with EJBs[ Go to top ] Well u can pass constant string as contstructor parameters for MyType object. And then override the equals method which compares the equality of the string. - Posted by: Kapil Israni - Posted on: March 11 2002 13:43 EST - in response to Michael Mattox Using enumerated types with EJBs[ Go to top ] You need to use the canonical Java enumerated type pattern, which works with serialisation. - Posted by: Tom Davies - Posted on: March 11 2002 18:33 EST - in response to Michael Mattox See Bloch's book 'Effective Java' Tom
http://www.theserverside.com/discussions/thread.tss?thread_id=12394
CC-MAIN-2014-15
refinedweb
248
52.73
#include <IpIpoptAlg.hpp> Inheritance diagram for Ipopt::IpoptAlgorithm: Main Ipopt algorithm class, contains the main optimize method, handles the execution of the optimization. The constructor initializes the data structures through the nlp, and the Optimize method then assumes that everything is initialized and ready to go. After an optimization is complete, the user can access the solution through the passed in ip_data structure. Multiple calls to the Optimize method are allowed as long as the structure of the problem remains the same (i.e. starting point or nlp parameter changes only). Definition at line 45 of file IpIpoptAlg.hpp. Constructor. (The IpoptAlgorithm uses smart pointers for these passed-in pieces to make sure that a user of IpoptAlgoroithm cannot pass in an object created on the stack!) Default destructor. Default Constructor. Copy Constructor. overloaded from AlgorithmStrategyObject Implements Ipopt::AlgorithmStrategyObject. Main solve method. Methods for IpoptType. Overloaded Equals Operator. Method for updating the current Hessian. This can either just evaluate the exact Hessian (based on the current iterate), or perform a quasi-Newton update. Method to update the barrier parameter. Returns false, if the algorithm can't continue with the regular procedure and needs to revert to a fallback mechanism in the line search (such as restoration phase) Method to setup the call to the PDSystemSolver. Returns false, if the algorithm can't continue with the regular procedure and needs to revert to a fallback mechanism in the line search (such as restoration phase) Method computing the new iterate (usually vialine search). The acceptable point is the one in trial after return. Do all the output for one iteration. Sets up initial values for the iterates, Corrects the initial values for x and s (force in bounds). Print the problem size statistics. Compute the Lagrangian multipliers for a feasibility problem. Method for ensuring that the trial multipliers are not too far from the primal estime. If a correction is made, new_trial_z is a pointer to the corrected multiplier, and the return value of this method give the magnitutde of the largest correction that we done. If no correction was made, new_trial_z is just a pointer to trial_z, and the return value is zero. Definition at line 102 of file IpIpoptAlg.hpp. Definition at line 103 of file IpIpoptAlg.hpp. Definition at line 104 of file IpIpoptAlg.hpp. Definition at line 105 of file IpIpoptAlg.hpp. Definition at line 106 of file IpIpoptAlg.hpp. Definition at line 107 of file IpIpoptAlg.hpp. Definition at line 108 of file IpIpoptAlg.hpp. The multipler calculator (for y_c and y_d) has to be set only if option recalc_y is set to true. Definition at line 111 of file IpIpoptAlg.hpp. Flag indicating if the statistic should not be printed. Definition at line 161 of file IpIpoptAlg.hpp. safeguard factor for bound multipliers. If value >= 1, then the dual variables will never deviate from the primal estimate by more than the factors kappa_sigma and 1./kappa_sigma. Definition at line 170 of file IpIpoptAlg.hpp. Flag indicating whether the y multipliers should be recalculated with the eq_mutliplier_calculator object for each new point. Definition at line 174 of file IpIpoptAlg.hpp. Feasibility threshold for recalc_y. Definition at line 176 of file IpIpoptAlg.hpp. Flag indicating if we want to do Mehrotras's algorithm. This means that a number of options are ignored, or have to be set (or are automatically set) to certain values. Definition at line 180 of file IpIpoptAlg.hpp.
http://www.coin-or.org/Doxygen/CoinAll/class_ipopt_1_1_ipopt_algorithm.html
crawl-003
refinedweb
574
50.63
30, 2007 09:00 AMPiers Cawley writes about a potential problem he discovered in a blog article about Lazily Initialized Attributes. The problematic code:". nil. Here an example to illustrate: a = falseWhat's the result of this? Since a was initialized in the first line, the second line should not have had any effect. However, executing that code reveals that a ||= "Ruby" anow has the value "Ruby", instead of false. nilchecks in Ruby: if nameIn Ruby, a puts name.capitalize end nilis interpreted as boolean false, so the code in the ifclause will only run if the nameis not nil. nilor false. In that case, an access would reset the variable to it's default value. def contentThis only initializes the variable if the variable hasn't been defined yet. unless instance_variable_defined? :@content @content = [] end return @content end ||=wasn't the right solution and instead the initialization code is supposed to check if the variable had been defined yet. Download the Free Adobe® Flex® Builder 3 Trial Effective Management of Static Analysis Vulnerabilities and Defects Agile Development: A Manager's Roadmap for Success Ensuring Code Quality in Multi-threaded Applications The Agile Business Analyst: Skills and Techniques needed for Agile - No signal. No noise.. The original article on "Lazily Initialized Attributes" ( ) The trickiness in Jay's case is trying to lazy initialize a boolean to true. That's the only place one could actually run into a problem, but a simple nil check would work there. I still think the standard @x ||= default works except for cases where you want to initialize a boolean. In any case, lazy initialization should probably only used for expensive operations, not for setting simple defaults. I think the distinction between nil? and empty? or blank? (a Rails addition) is more interesting area to explore. I think using or implementing those methods makes things more clear than setting something to nil and expecting that to mean empty, as opposed to uninitialized.. I have to admit to using ||= absentmindedly. Thanks for pointing out the gap in this idiom. I'm sure it was only a matter of time before I was bit by
http://www.infoq.com/news/2007/07/ruby-gotcha
crawl-002
refinedweb
357
54.73
I've just started using pyserial as I will eventually need to read/save information coming from a particular port. Using the following code I am merely printing the port used and then trying to write and then read in some text ("hello"). The port is printing fine, but the output of my string is coming out as 5. Any idea why this is? import serial import sys from time import sleep try: ser = serial.Serial('\\.\COM8', 9600,timeout=None, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE, bytesize=serial.EIGHTBITS) except: sys.exit("Error connecting device") print ser.portstr x = ser.write("hello") print x ser.close() >>> \.\COM8 5 >>> you can do it this way to test it. Firstly create a pair of ports in manage ports First port: COM199 Second Port: COM188 Click Add Pair On one console/script do the below steps: >>> import serial >>> ser = serial.Serial('COM196', 9600,timeout=None, parity=serial.PARITY_NONE, stopbits=serial.S BITS_ONE, bytesize=serial.EIGHTBITS) >>> print ser.portstr COM196 >>> x = ser.read(5) # It will be waiting till it receives data >>> print x hello On the other console, perform below steps: >>> import serial >>> s = serial.Serial('COM188') >>> s.write("hello") 5L you can test it this way (or) by creating python programs for each of the ports
https://codedump.io/share/gmX8JocYRnfD/1/pythonpyserial-reading-incoming-information-from-port
CC-MAIN-2017-26
refinedweb
214
52.97
This chapter describes how to integrate with unmanaged code in native DLLs. Unless otherwise stated, the types mentioned in this chapter exist in either the System or the System.Runtime.InteropServices namespace. P/Invoke, short for Platform Invocation Services, allows you to access functions, structs, and callbacks in unmanaged DLLs. For example, consider the MessageBox function, defined in the Windows DLL user32.dll as follows: int MessageBox (HWND hWnd, LPCTSTR lpText, LPCTSTR lpCation, UINT uType); You can call this function directly by declaring a static method of the same name, applying the extern keyword, and adding the DllImport attribute: using System; using System.Runtime.InteropServices; class MsgBoxTest { [DllImport("user32.dll")] static extern int MessageBox (IntPtr hWnd, string text, string caption, int type); public static void Main( ) { MessageBox (IntPtr.Zero, "Please do not press this again.", "Attention", 0); } } The MessageBox classes in the System.Windows and System.Windows.Forms namespaces themselves call similar unmanaged methods. The CLR includes a marshaler that knows how to convert parameters and return values between .NET types and unmanaged types. In this example, the int parameters translate directly to 4-byte integers that the function expects, and the string parameters are converted into null-terminated arrays of 2-byte Unicode characters. IntPtr is a struct designed to encapsulate an unmanaged handle, ... No credit card required
https://www.oreilly.com/library/view/c-30-in/9780596527570/ch22.html
CC-MAIN-2019-26
refinedweb
219
50.02
How to Make a Simple Comparisons in C Programming You make comparisons all the time, so you shouldn’t avoid comparisons in C programming. What will you wear in the morning? Should you avoid Bill’s office because the receptionist says he’s “testy” today? And how much longer will you put off going to the dentist? The computer is no different, albeit the comparisons it makes use values, not abstracts. A SIMPLE COMPARISON #include <stdio.h> int main() { int a,b; a = 6; b = a - 2; if( a > b) { printf("%d is greater than %dn",a,b); } return(0); } Exercise 1: Create a new project using the source code shown in A Simple Comparison. Build and run. Here’s the output you should see: 6 is greater than 4 Fast and smart, that’s what a computer is. Here’s how the code works: Line 5 declares two integer variables: a and b. The variables are assigned values in Lines 7 and 8, with the value of variable b being calculated to be 2 less than variable a. Line 10 makes a comparison: if( a > b) Programmers read this line as, “If a is greater than b.” Or when they’re teaching the C language, they say, “If variable a is greater than variable b.” And, no, they don’t read the parentheses. Lines 11 through 13 belong to the if statement. The meat in the sandwich is Line 12; the braces (curly brackets) don’t play a decision-making role, other than hugging the statement at Line 12. If the comparison in Line 10 is true, the statement in Line 12 is executed. Otherwise, all statements between the braces are skipped. Exercise 2: Edit the source code from A Simple Comparison so that addition instead of subtraction is performed in Line 8. Can you explain the program’s output?
https://www.dummies.com/programming/c/how-to-make-a-simple-comparisons-in-c-programming/
CC-MAIN-2018-47
refinedweb
311
73.47
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab. Microcontroller Programming » I made a new language for programming the Atmega I'm been noticing how tough programming can be for people who are new to it, perhaps forced into it just because they like electronics and just want to use microcontrollers. I see how bad "C" can be for that type of person. So I made up a language. The compiler here is very rough around the edges. It's written for .NET Framework 4 so only works on Windows for now. Mono on Linux maybe someday, but only if people seem interested. Install .NET, download the compiler, put this "sample.oq" file in the same folder and run "oq sample.oq" and it should make "sample.c" out of it. So it's more of a translator-to-C than a full compiler, but I think it still counts. Here's the sample.oq program: uses: Adc # include Adc.oq for ADC routines HD44780 as Lcd # include HD44780.oq for LCD routines Usart # include Usart.oq for serial I/O routines pins: Lcd PD5,PD4,PD3,PD2 data # define the pins for LCD data, enable, and RS Lcd PD6 enable Lcd PD7 RS in PB1 button # define the pin for our language button sram: uint2 language = 0 # define two bits in SRAM, set to 0 (English) when [button] falls: # define what happens when debounced button is pressed language++ # increment "language" value (0, 1, 2, 3, 0, ...) Lcd.Clear # clear the LCD streams: send UTF_8 Usart.WriteByte # specify a UTF-8 output stream for the serial port sum = Vcc * 64 # define "sum" as large enough for 64 voltage samples loop: # start the main loop sum = 0mV # reset sum to zero volts repeat 64: # repeat 64 times (for 3 extra bits precision) sum = sum + Adc.GetVoltage() # get the ADC voltage and add it to the sum float avg = sum / 64 # calculate the average (still in volts) degF = avg / 10mV # LM34 conversion 1°F = 10mV degC = (degF - 32) * 5 / 9 # celsius conversion Lcd.Home # move the LCD cursor to the home position if language == 0: # display the temperature Lcd.Write "Temperature: " degF "°F " # in English, fahrenheit elif language == 1: Lcd.Write "Temperatur: " degC "°C " # celsius in german, swedish, norwegian, danish, etc. elif language == 2: Lcd.Write "オンド" degC "°C " # in Japanese, celsius else: Lcd.Write "Tymheredd: " degC "°C " # in Welsh, fahrenheit send degF "°F\r\n" # write the temperature to the serial port Some features of the language: Syntax borrows heavily from Python. Less hassle reading and writing individual bits of different ports. For example, here's the function in Adc.oq for writing to MUX3:0 in the ADMUX register: def SetChannel(uint4 mux): MUX3:0 = mux Using custom pins is easier. Here's how to define a pin as an output pin: pins: out PC4 led Here's how you would set that pin to high: +led Or to low: -led Or to toggle it: ~led Or to either high or low depending on whether a button is up or down: led = button It does units: wait 50ms degF = Adc.GetVoltage() / 10mV Take a look at the sample.oq above, and the Adc.oq, HD44780.oq, and Usart.oq in the ZIP file for examples of other things you can do. LED blinking program in this language: pins: out PB1 led every 500ms: ~led Wow, bretm that is really neat. I am probable a typical target user, I can see that most everything I normally do can be done. Now as soon as I can find some time I'll try it out. Ralph I just installed the NET4 framework last night and just might take a serious look into this one. If it makes things easier, I am all for it. Basic syntax is like Python--whitespace is important. It uses indents to indicate the block structure. Tab stops are considered to be every 4 spaces when determining if lines are indented equivalently. Statements that expected an indented block afterword will end with ":". If it doesn't end with ":" it's a one-line statement. The "pins:" block defines friendly names for pins or groups of pins and specifies their direction. If a pin is set to "out" it's an output pin and you won't be able to read its value in an expression. If a pin is set to "in" or "in floating" it's an input pin and you won't be able to set it's value. If it is set to "io" then you can do either, and change its direction at run time. Examples: pins: in PB1 inputWithPullup in floating PB2 inputWithoutPullup out PB3 myOutputPin io PB4 aPinIWillControl +myOutputPin # set pin to HIGH myOutputPin = 1 # set pin to HIGH -myOutputPin # set pin to LOW myOutputPin = 0 # set pin to LOW ~myOutputPin # toggle the pin !aPinIWillControl # set pin to output ?aPinIWillControl # set pin to input ^aPinIWillControl # toggle pin direction if inputWithPullup: # do something if pin is HIGH if not [inputWithPullup]: # do something if debounced pin is LOW if not {inputWithoutPullup}: # do something if denoised pin is low Debouncing and denoising are done by checking the pin every 5ms. For debouncing, after a pin change is seen, further changes are ignored for the next 40ms. For denoising, pin changes are ignored until the new pin state has persisted for 40ms. You can also define groups of pins on the same port. pins: out PB3,PB2,PB1,PB0 knightRider knightRider = 0b0001 wait 100ms knightRider = 0b0010 wait 100ms knightRider = 0b0100 wait 100ms knightRider = 0b1000 wait 100ms You can respond to pin changes (or a change in any condition) with "when". when button falls: # do something if a pin goes low when sensor rises: # do something if a pin goes high when button1 or button2 and not button3 changes: # do something if the boolean expression changes value Variables are defined by "sram", "progmem", "eeprom", or "persistent" statements. There is only limited-to-no support for string variables. It's mostly just numbers for now. sram: # define variables in SRAM uint6 x # 6 bit unsigned integer (0 to 63) int16 y # 16 bit signed integer (signed must be 8, 16, 32) bit z # 1 bit unsigned integer float u # 32 bit floating-point number eeprom: uint8 state = 3 # define EEPROM variable # and set it to 3 if EEPROM is uninitialized Reading and writing EEPROM variables works just like regular variables. Be careful you don't change their value very often because they have a limited lifetime. If you redefine the program to define different EEPROM variables or change the default values, all of the EEPROM variables will be reinitialized. The "persistent" statement is used inside a "def" statement to define the equivalent of static local variables in "C", which are variables local to a function but that keep their values between function calls. They use SRAM. Some pseudo-multi-tasking: pins: out PB1 led1 out PB2 led2 out PB3 led3 at 1Hz: ~led1 at 2.71828Hz: ~led2 every 314159us: ~led3 These commands happen inside a timer interrupt, so they need to be quick. Also, you can't access global variables from inside an interrupt. If you need to share data with the main loop you need to use a queue: queues: fred[4] uint8 # define a queue that can hold 4 bytes at 1Hz: put 25 onto fred # put a "25" into the queue every second loop: # main program loop # do stuff in my main loop # and check if fred has any messages for msg in fred: # do something with each msg This correctly handles "volatile" issues and ensures that data-sharing operations are done atomically to avoid race conditions. bretm, pardon my being dense but: [quote] Install .NET, download the compiler, put this "sample.oq" file in the same folder [/quote] In the "same folder" as what? The compiler? That might take a bit to find. Or is there a development structure implied like nerdkits/code/initialload? Is there a .zip folder of your trial programs(?)/scripts? I just fired up my Windows box to give it a try! Got it!! Sorry I should have tried doing it before asking. The "folder" is wherever you unzip the oq.zip file to, so use that. I used D:nerdkitsBRETM_scripting. Copied the sample.oq code from above to the folder and ran oq sample.oq from the command line (cd D:nerdkitsBRETM_scripting of course). Now lets see what trouble I can get into. Oh this is cute: When you compile the script the source code is deleted or at least my text editor (Notepad++) thinks it is. It is just a mistake on the editors part sample.oq is still in the folder. If I had closed the editor before running the compiler I would not have seen the message, but I usually do not close the editor so I'll just live with another nagging, useless, stupid application Warning message, after all this is Windows. Well this is pretty neat! I've always been a fan of c, but this looks like it could save me some time. I am actually tempted to set avr on windows so I can try it out. I tried compiling the generated c with avr-gcc but it didn't work, tons of errors. I was probably doing something wrong though. I need to take the time to learn how to use avr-gcc one of these days instead of just modifying makefiles... Right now it's easiest if you use the same folder as rhe compiler because I didn't make an installer app to set up the PATH environment variables to support a more traditional compiler set up. I will be doing that eventually. Let me know what avr-gcc errors you were getting, but I think I know: I forgot to talk about the makefile! The generated C references some include files like "oq.h" that need to be findable by the compiler, and "oq.c" and "oqwait.c" need to be compiled along with your program. My makefile for sample.oq: PROJ = sample MCU = atmega328p MCUb = m328p OBJS = INCS = Oq.c OqWait.c include maketemplate.inc And makefiletemplate.inc: ELF = $(PROJ).elf HEX = $(PROJ).hex EEP = $(PROJ).eep ASM = $(PROJ).asm all: ## Translate oq $(PROJ).oq ## Compile and link avr-gcc -g -Os -mmcu=$(MCU) $(PROJ).c $(INCS) $(OBJS) -o $(ELF) ## Generate HEX image for FLASH avr-objcopy -O ihex $(ELF) $(HEX) ## Generate EEP image for EEPROM avr-objcopy -j .eeprom --set-section-flags=.eeprom="alloc,load" --change-section-lma .eeprom=0 --no-change-warnings -O ihex $(ELF) $(EEP) ## Generate assembly listing (informational) avr-objdump -S -d $(ELF) >$(ASM) ## Show sizes avr-size -C --mcu=$(MCU) $(ELF) ## Upload avrdude -c avr109 -p $(MCUb) -b 115200 -P COM4 -U flash:w:$(HEX):a This builds sample.oq all the way to .hex file format. Okay, I successfully compiled the generated c file then uploaded it to the chip, then I realized this was the temp sensor project :/ I'm goin to try again only with a simple blinking led that way I can see it work without rewiring my lcd which is currently in use for something else. ... And it works like a charm. :D Another feature of the language is that it's easy to break a value up into its constituent bits, or to concatenate bits into a larger value. uint8 abyte = 123 # an 8-bit variable hi, lo = abyte bits 4:4 # the high and low 4 bits from "abyte" abyte = hi:lo # combined back into a byte float pi = 3.14 # extract the IEEE fields from a float sign, exponent, mantissa = pi bits 1:8:23 If you don't care about some of the bits, you can do this: x, _ = abyte bits 2:6 # grab top two bits and ignore the rest So [quote] x, _ = abyte bits 2:6 # grab top two bits and ignore the rest [/quote] Says to Let x = the top two bits and _ is a throwaway? Is _ always a throughway when used in an assignment? Or can it hold a value? Can _ be referenced? It's always thrown away and cannot be referenced. No C assignment code is generated for it. It's also useful when you have a function that returns multiple values and you don't care about some of them: def MyFunc(): return 123, 456 _, second = MyFunc() Multiple variable assignment can also be used directly in an assignment, e.g. to swap two values: a, b = b, a Please log in to post a reply.
http://www.nerdkits.com/forum/thread/1749/
CC-MAIN-2018-09
refinedweb
2,111
72.66
Full Circle THE INDEPENDENT MAGAZINE FOR THE UBUNTU LINUX COMMUNITY ISSUE #51 - July 2011 HOW TO: USE KDE 4.6 - PART 2 DESKTOP EFFECTS AND APPLICATION EQUIVALENTS full circle magazine #51 1 full circle magazine is neither affiliated with, nor endorsed by, Canonical Ltd. contents ^ Full Circle HowTo Program In Python Pt25 THE INDEPENDENT MAGAZINE FOR THE UBUNTU LINUX COMMUNITY p.07 Linux News p.04 My Desktop Opinions My Story p.27 My Opinion p.28 p.42 LibreOffice Pt6 p.15 Ubuntu Dev. Pt3 p.17 Command & Conquer p.05 Ubuntu Games p.37 I Think... p.29 Use KDE 4.6 Pt2 p.20 Linux Labs p.24 Q&A p.39 Review p.32 Write For Full Circle p.23 Ubuntu Women p.36 Top 5 p.44 Letters p.34 Columns circle magazineendorsement. #51 2 magazine should in no way be assumed to have Canonical contents ^ EDITORIAL This magazine was created using : Welcome to another issue of Full Circle! I have to say, I was very taken aback by all those requesting more KDE articles from me. I had assumed that KDE was still quite 'fringe', and not widely used. Seems I was very wrong. Even last month's question showed that KDE, while miles behind Gnome, is still quite popular, and that may increase as people take a dislike to Gnome-Shell. For the second KDE article, I've focused on enabling desktop effects and listing some KDE equivalents to Gnome applications. Oh, and If you're wondering how to install KDE on your Ubuntu based distro, then you should check out the letters page. The Python and LibreOffice series continue, and the Ubuntu Development series reaches part three, where Daniel shows how to submit a bug fix. If family history is more your thing, then have a look at this month's review of GRAMPS - the genealogy software. Starting next month, David Rowell will begin a series of articles showing how to use GRAMPS - beginning with creating a new database and entering names and details. So, keep an eye out for that. My pile of My Desktop and My Story articles is getting quite low, so now is a good time to submit your desktop/story articles. Please include some info on how you got your desktop to look the way it does. But, don't let me stop you if you want to write an article on something else. All articles are welcome! Full Circle Podcast Released every two weeks,’t fit in the main podcast. Hosts: Robin Catling Ed Hewitt Dave Wilkins All the best, and keep in touch. Ronnie ronnie@fullcirclemagazine.org full circle magazine #51 3 contents ^ LINUX NEWS KDE 4.7 Released Indian Courts To Use Ubuntu Humble Indie Bundle #3 Humble Indie Bundle #3 has just been released. The games this time are: Crayon Physics Deluxe, Cogs, VVVVVV, Hammerfight and And Yet It Moves. As the website explains:. • Plasma Workspaces Become More Portable • Updated KDE Applications • Improved Multimedia, Instant Messaging and Semantic Capabilities • Instant Messaging integrated into desktop • Stability As Well As Features Source: KDE.org If you bought these five games separately, it would cost around $50, but we're letting you set the price! All of the games work great on Mac, Windows, and Linux. As of writing the average Linux payment is $10.37. Average Mac payment is $5.43 with Windows at $3.47 Source: humblebundle.com #51 4 For the past four years all Indian Courts have been using RedHat Enterprise 5. Now, the Supreme Court of India has directed all Courts (approximately 17,000 of them) to change to Ubuntu 10.04. The Supreme Court of India has also given all Courts a customized Ubuntu DVD. Each Court uses at least five computers. That's five computers multiplied by seventeen thousand Courts. That's 85,000 computers that will get Ubuntu The Supreme Court of India committee page in which all the Indian Courts are directed to install Ubuntu is at: Source: A.Ramesh Babu (email) contents ^ COMMAND & CONQUER Dzen2 & Conky Written by Lucas Westermann R ecently, I made the decision to move from WMFS (Window Manager From Scratch) to XMonad, since WMFS had started to present some issues when handling certain windows and layouts. Once I had made the switch, I was fighting with xmobar to get it working. Luckily, a guy on the ArchLinux Forum made the suggestion that I use Conky with dzen2 for my panel instead of Conky with xmobar, as I was trying to do. And so, I will be covering how to create your own status bar using dzen2 and Conky. Before I start I'd like to note that I am using a version of dzen2 that has xft support enabled. If you happen across a line in my configuration files/examples that is in the format “Togoshi Gothic:size=9â€?, you'll need to replace it with a font from xfontsel, or else try the dzen2 packages from er/+archive/ppa/+packages, which seem to have XFT support. For those of you interested in my entire xmonad.hs, it's listed in the further reading section. Below is my .conkyrc that I use for the status bar. I'll cover the important lines and explain what the scripts do. I won't be including my scripts, since they are either only for ArchLinux or are used for programs (like MPD and Dropbox) that not everyone uses. If you want a specific script, feel free to email me. background no out_to_console yes out_to_x no update_interval 2 total_run_times 0 use_spacer none TEXT ${execi 1 /usr/bin/mpd-info} | Dropbox: ${execi 5 echo $(dropbox status)} | $memperc% ($mem) | Updates: ${execi 300 python ~/Dropbox/Scripts/conky/packa ges-short} | ${execi 60 python ~/Dropbox/Scripts/conky/gmail .py} Email(s) | full circle magazine #51 ^fg(\#9F6B00)${time %a %b %d %H:%M}^fg() The first line disables the background, and the next two disable the graphical aspect, so that Conky simply returns a string. The update_interval tells Conky how often to refresh the information. Total_run_times tells Conky to exit after a certain number of runs. Set it to 0 for infinite number of runs. User_spacer none tells it to not space out the commands below TEXT, since I do it by hand. The following line of commands does the following: <artist>-<song> | Dropbox: <status> | % (<used RAM>) | Updates: <# of updates> | # new Email(s) | <clock>. The clock is wrapped in ^fg(\#9F6B00)^fg(), so that dzen2 prints it in a nice gold colour, which matches my currently selected workspace (separate 5 dzen2 instance). To see a screenshot, check the second link in the Further Reading section. Once you've decided on your .conkyrc, you'll need to decide on the switches you want to use with dzen2. For that, you'll need to know the following switches: -fg <hex> - sets the foreground colour using the hex value for the colour -bg <hex> - sets the background colour using the hex value for the colour -fn <font> - sets the font -h <size in pixels> - sets the height -y <y-coordinate> - shifts the bar up/down -x <x-coordinate> - shifts the bar left/right -w <pixels> - sets width of the bar -sa <l,c,r> - set alignment of slave window -ta <l,c,r> - set alignment of title windows -xs <screen> - set the screen to display on. contents ^ COMMAND & CONQUER An example of how I call dzen2 for my workspaces (not the dzen2 instance with Conky): dzen2 -fg '#9c9c9c' -bg '#0c0c0c' -fn 'Togoshi Gothic:size=9' -h 18 -y 0 -w 660 -ta l An example of how I pipe Conky (it's a little more complicated the way I do it in my config file, but it's just easier to manage that way): conky -c ~/.xmonad/.conkyrc_dwm_bar|dz en2 -w 1040 -x 660 -ta r integrated status bar, I'd be interested to see how you guys put this information to use! If you have any questions, comments, or requests, you can reach me at lswest34@gmail.com. Please put “C&C” or “FCM” in the subject line of the email, so it doesn't get lost. Server Circle is a new question and answer site run by techies. Further Reading: – my xmonad.hs alq7 - Screenshot Users with any level of experience can ask technical questions for free about anything server related, and receive answers from trusted experts, who are rated by the community. With time you can earn reputation points, and even financial rewards, by contributing your answers to questions from other people. The x-coordinate is the same as the width of the first bar, so that it lines up. You can also configure some default options for dzen2 using your .Xresources file in the format of: dzen2.<property>: <setting> Example: dzen2.font: "Togoshi Gothic:size=10" Hopefully you've found this useful. For those of you who are going to use this to pretty up your Conky without using lua, or to those of you who run a window manager where there is no Lucas has learned all he knows from repeatedly breaking his system, then having no other option but to discover how to fix it. You can email Lucas at: lswest34@gmail.com. full circle magazine #51 NOTE: Server Circle is not affiliated with, nor endorsed by, Full Circle magazine. 6 contents ^ HOW-TO A Program In Python - Part 25 Written by Greg Walters number of you have commented about the GUI programming articles and how much you've enjoyed them. In response to that, we will start taking a look at a different GUI toolkit called Tkinter. This is the “official” way to do GUI programming in Python. Tkinter has been around for a long time, and has gotten a pretty bad rap for looking “old fashioned”. This has changed recently, so I thought we'd fight that bad thought process. PLEASE NOTE – All of the code presented here is for Python 2.x only. In an upcoming article, we'll discuss how to use tkinter in Python 3.x. If you MUST use Python 3.x, change the import statements to “from tkinter import *”. A Little History And A Bit Of Background Tkinter stands for “Tk interface”. Tk is a programming language all on its own, and the Tkinter module allows us to use the GUI functions there. There are a number of widgets that come natively with the Tkinter module. Some of them are Toplevel (main window) container, Buttons, Labels, Frames, Text Entry, CheckButtons, RadioButtons, Canvas, Multiline Text entry, and much more. There are also many modules that add functionallity on top of Tkinter. This month, we'll focus on four widgets. Toplevel (from here I'll basically refer to it as the root window), Frame, Labels, and Buttons. In the next article, we'll look at more widgets in more depth. Basically, we have the Toplevel container widget which contains (holds) other widgets. This is the root or master window. Within this root window, we place the widgets we want to use within our program. Each widget, other than the Toplevel root widget container, has a parent. The parent doesn't have to be the root window. It can be a different widget. We'll explore that next month. For this month, everything will have a parent of the root full circle magazine #51 window. COLUMNS ROWS | 0,0 | | 0,1 | 0,2 | 0,3 In order to place and display the child widgets, we have to use what's called “geometry management”. It's how things get put into the main root window. Most programmers use one of three types of geometry management, either Packer, Grid, or Place management. In my humble opinion, the Packer method is very clumsy. I'll let you dig into that on your own. The Place management method allows for extremely accurate placement of the widgets, but can be complicated. We'll discuss the Place method in a future article set. For this time, we'll concentrate on the Grid method. | | | | > 1,0 1,1 1,2 1,3 | | | | 2,0 2,1 2,2 2,3 | | | | 3,0 3,1 3,2 3,3 | | | | 4,0 4,1 4,2 4,3 | | | | So parent has the grid, the widgets go into the grid positions. At first glance, you might think that this is very limiting. However, widgets can span multiple grid positions in either the column direction, the row direction, or both. Our First Example Our first example is SUPER simple (only four lines), but shows a good bit. from Tkinter import * root = Tk() Think of a spreadsheet. There are rows and columns. Columns are vertical, rows are horizontal. Here's a simple text representation of the cell addresses of a simple 5-column by 4-row grid (above right). 7 button = Button(root, text = "Hello FullCircle").grid() root.mainloop() Now, what's going on here? Line one imports the Tkinter contents ^ PROGRAM IN PYTHON - PART 25 library. Next, we instantiate the Tk object using root. (Tk is part of Tkinter). Here's line three. button = Button(root, text = "Hello FullCircle").grid() class App: def __init__(self, master): frame = Frame(master) self.lblText = Label(frame, text = "This is a label widget") self.btnQuit = Button(frame, text="Quit", fg="red", command=frame.quit) self.btnHello = Button(frame, text="Hello", command=self.SaySomething) frame.grid(column = 0, row = 0) self.lblText.grid(column = 0, row = 0, columnspan = 2) self.btnHello.grid(column = 0, row = 1) self.btnQuit.grid(column = 1, row = 1) We create a button called button, set its parent to the root window, set its text to “Hello FullCircle,” and set it into the grid. Finally, we call the window's main loop. Very simple from our perspective, but there's a lot that goes on behind the scenes. Thankfully, we don't need to understand what that is at this time. This is the import statement for the Tkinter library. Run the program and let's see what happens. On my machine the main window shows up at the lower left of the screen. It might show up somewhere else on yours. Clicking the button doesn't do anything. Let's fix that in our next example. The first line in the __init__ routine creates a frame that will be the parent of all of our other widgets. The parent of the frame is the root window (Toplevel widget). Next we define a label, and two buttons. Let's look at the label creation line. Our Second Example This time, we'll create a class called App. This will be the class that actually holds our window. Let's get started. from Tkinter import * We define our class, and, in the __init__ routine, we set up our widgets and place them into the grid. self.lblText = Label(frame, text = "This is a label widget") We create the label widget and call it self.lblText. That's inherited from the Label widget object. We set its parent (frame), and set the full circle magazine #51 text that we want it to display (text = “this is a label widget”). It's that simple. Of course we can do much more than that, but for now that's all we need. Next we set up the two Buttons we will use: built in function, so we don't need to actually create it. In the case of btnHello, it's a routine called self.SaySomething. This we have to create, but we have a bit more to go through first. self.btnQuit = Button(frame, text="Quit", fg="red", command=frame.quit) We need to put our widgets into the grid. Here's the lines again: self.btnHello = Button(frame, text="Hello", command=self.SaySomething) frame.grid(column = 0, row = 0) We name the widgets, set their parent (frame), and set the text we want them to show. Now btnQuit has an attribute marked fg which we set to “red”. You might have guessed this sets the foreground color or text color to the color red. The last attribute is to set the callback command we want to use when the user clicks the button. In the case of btnQuit, it's frame.quit, which ends the program. This is a 8 self.lblText.grid(column = 0, row = 0, columnspan = 2) self.btnHello.grid(column = 0, row = 1) self.btnQuit.grid(column = 1, row = 1) First, we assign a grid to the frame. Next, we set the grid attribute of each widget to where we want the widget to go. Notice the columnspan line for the label (self.lblText). This says that we contents ^ PROGRAM IN PYTHON - PART 25 want the label to span across two grid columns. Since we have only two columns, that's the entire width of the application. Now we can create our callback function: def SaySomething(self): print "Hello to FullCircle Magazine Readers!!" This simply prints in the terminal window the message “Hello to FullCircle Magazine Readers!!” Finally, we instantiate the Tk class - our App class - and run the main loop. Give it a try. Now things actually do something. But again, the window position is very inconvenient. Let's fix that in our next example. Our Third Example Save the last example as example3.py. Everything is exactly the same except for one line. It's at the bottom in our main routine calls. I'll show you those lines with our new one: root = Tk() root = Tk() root.geometry('150x75+550+150 ') app = App(root) app = App(root) root.mainloop() root.mainloop() class Calculator(): def __init__(self,root): master = Frame(root) self.CurrentValue = 0 self.HolderValue = 0 self.CurrentFunction = '' self.CurrentDisplay = StringVar() self.CurrentDisplay.set('0') self.DecimalNext = False self.DecimalCount = 0 self.DefineWidgets(master) self.PlaceWidgets(master) What this does is force our initial window to be 150 pixels wide and 75 pixels high. We also want the upper left corner of the window to be placed at Xpixel position full circle magazine #51 550 (right and left) and the Y-pixel position at 150 (top to botton). How did I come up with these numbers? I started with some reasonable values and tweaked them from there. It's a bit of a pain in the neck to do it this way, but the results are better than not doing it at all. Our Fourth Example - A Simple Calculator ----------------| 0 | ----------------| 1 | 2 | 3 | + | ----------------| 4 | 5 | 6 | - | ----------------| 7 | 8 | 9 | * | ----------------| - | 0 | . | / | ----------------| = | ----------------| CLEAR | ----------------- Now, let's look at something a bit more from Tkinter import * complicated. This time, we'll create a simple “4 def StartUp(): global val, w, root banger” calculator. If root = Tk() you don't know, the root.title('Easy Calc') phrase “4 banger” root.geometry('247x330+469+199') means four functions: w = Calculator(root) root.mainloop() Add, Subtract, Multiply, and Divide. and then move on. Right is what it looks like in simple text form. We begin our class definition We'll dive right into it and I'll explain the code (middle right) as we go. Outside of the geometry statement, this (left) should be pretty easy for you to understand by now. Remember, pick some reasonable values, tweak them, 9 and set up our __init__ function. We set up three variables as follows: • CurrentValue – Holds the current value that has been input into the calculator. • HolderValue – Holds the value that existed before the user clicks a function key. contents ^ PROGRAM IN PYTHON - PART 25 • CurrentFunction – This is simply a “bookmark” to note what function is being dealt with. Next, we define the CurrentDisplay variable and assign it to the StringVar object. This is a special object that is part of the Tkinter toolkit. Whatever widget you assign this to automatically updates the value within the widget. In this case, we will be using this to hold whatever we want the display label widget to... er... well... display. We have to instantiate it before we can assign it to the widget. Then we use the built in 'set' function. We then define a boolean variable called DecimalNext, and a variable DecimalCount, and then call the DefineWidgets function, which creates all the widgets, and then call the PlaceWidget function, which actually places them in the root window. def DefineWidgets(self,master): self.lblDisplay = Label(master,anchor=E,relief = SUNKEN,bg="white",height=2,te xtvariable=self.CurrentDispla y) Now, we have self.btn1 = Button(master, text = '1',width = 4,height=3) already defined a self.btn1.bind('<ButtonRelease-1>', lambda e: self.funcNumButton(1)) label earlier. self.btn2 = Button(master, text = '2',width = 4,height=3) self.btn2.bind('<ButtonRelease-1>', lambda e: self.funcNumButton(2)) However, this time self.btn3 = Button(master, text = '3',width = 4,height=3) we are adding a self.btn3.bind('<ButtonRelease-1>', lambda e: self.funcNumButton(3)) number of other self.btn4 = Button(master, text = '4',width = 4,height=3) attributes. Notice self.btn4.bind('<ButtonRelease-1>', lambda e: self.funcNumButton(4)) that we aren't using the 'text' attribute. characters and the height is in text background (bg) to white in order Here, we assign the label to the lines. If you were doing a graphic to set it off from the rest of the parent (master), then set the in the button, you would use pixels window a bit. We set the height to anchor (or, for our purposes, to define the height and width. 2 (which is two text lines high, not justification) for the text, when it This can become a bit confusing in pixels), and finally assign the gets written. In this case, we are variable we just defined a moment until you get your head firmly telling the label to justify all text wrapped around it. Next, we are ago (self.CurrentDisplay) to the to the east or on the right side of setting the bind attribute. When textvariable attribute. Whenever the widget. There is a justify we did the buttons in the previous the value of self.CurrentDisplay attribute, but that's for multiple examples, we used the changes, the label will change its lines of text. The anchor attribute 'command=' attribute to define text to match automatically. has the following options... N, NE, what function should be called E, SE, S, SW, W, NW and CENTER. Shown above, we'll create some when the user clicks the button. The default is CENTER. You should This time, we are using the '.bind' of the buttons. think compass points for these. attribute. It's almost the same Under normal circumstances, the thing, but this is an easier way to I've shown only 4 buttons here. only really usable values are E That's because, as you can see, the do it, and to pass information to (right), W (left), and Center. the callback routine that is static. code is almost exactly the same. Notice that here we are using Again, we've created buttons Next, we set the relief or visual '<ButtonRelease-1>' as the trigger earlier in this tutor, but let's take a style of the label. The “legal” for the bind. In this case, we want closer look at what we are doing options here are FLAT, SUNKEN, to make sure that it's only after here. RAISED, GROOVE, and RIDGE. The the user clicks AND releases the default is FLAT if you don't specify left mouse button that we make We start by defining the parent anything. Feel free to try the other our callback. Lastly, we define the (master), the text that we want on combinations on your own after callback we want to call, and what the button, and the width and we're done. Next, we set the we are going to pass to it. Now, height. Notice that the width is in full circle magazine #51 10 contents ^ PROGRAM IN PYTHON - PART 25 those of you who are astute (which is each and every one of you) will notice something new. The 'lambda e:' call. In Python, we use Lambda to define anonymous functions that will appear to interpreter as a valid statement. This allows us to put multiple segments into a single line of code. Think of it as a mini function. In this case, we are setting up the name of the callback function and the value we want to send as well as the event tag (e:). We'll talk more about Lambda in a later article. For now, just follow the example. I've given you the first four buttons. Copy and paste the above code for buttons 5 through 9 and button 0. They are all the same with the exception of the button name and the value we send the callback. Next steps are shown right. The only thing that hasn't been covered before are the columnspan and sticky attributes. As I mentioned before, a widget can span more than one column or row. In this case, we are “stretchingâ€? the label widget across all four columns. That's self.btnDash = Button(master, text = '-',width = 4,height=3) self.btnDash.bind('<ButtonRelease-1>', lambda e: self.funcFuncButton('ABS')) self.btnDot = Button(master, text = '.',width = 4,height=3) self.btnDot.bind('<ButtonRelease-1>', lambda e: self.funcFuncButton('Dec')) The btnDash sets the value to the absolute value of the value entered. 523 remains 523 and -523 becomes 523. The btnDot button enters a decimal point. These examples, and the ones below, use the callback funcFuncButton. self.btnPlus = Button(master,text = '+', width = 4, height=3) self.btnPlus.bind('<ButtonRelease-1>', lambda e: self.funcFuncButton('Add')) self.btnMinus = Button(master,text = '-', width = 4, height=3) self.btnMinus.bind('<ButtonRelease-1>', lambda e: self.funcFuncButton('Subtract')) self.btnStar = Button(master,text = '*', width = 4, height=3) self.btnStar.bind('<ButtonRelease-1>', lambda e: self.funcFuncButton('Multiply')) self.btnDiv = Button(master,text = '/', width = 4, height=3) self.btnDiv.bind('<ButtonRelease-1>', lambda e: self.funcFuncButton('Divide')) self.btnEqual = Button(master, text = '=') self.btnEqual.bind('<ButtonRelease-1>', lambda e: self.funcFuncButton('Eq')) Here are the four buttons that do our math functions. self.btnClear = Button(master, text = 'CLEAR') self.btnClear.bind('<ButtonRelease-1>', lambda e: self.funcClear()) Finally, here is the clear button. It, of course, clears the holder variables and the display. Now we place the widgets in the PlaceWidget routine. First, we initialize the grid, then start putting the widgets into the grid. Here's the first part of the routine. def PlaceWidgets(self,master): master.grid(column=0,row=0) self.lblDisplay.grid(column=0,row=0,columnspan = 4,sticky=EW) self.btn1.grid(column = 0, row = 1) self.btn2.grid(column = 1, row = 1) self.btn3.grid(column = 2, row = 1) self.btn4.grid(column = 0, row = 2) self.btn5.grid(column = 1, row = 2) self.btn6.grid(column = 2, row = 2) self.btn7.grid(column = 0, row = 3) self.btn8.grid(column = 1, row = 3) self.btn9.grid(column = 2, row = 3) self.btn0.grid(column = 1, row = 4) full circle magazine #51 11 contents ^ PROGRAM IN PYTHON - PART 25 what the “columnspan” attribute does. There's a “rowspan” attribute as well. The “sticky” attribute tells the widget where to align its edges. Think of it as how the widget fills itself within the grid. Above left is the rest of our buttons. Before we go any further let's take a look at how things will work when the user presses buttons. Let's say the user wants to enter 563 + 127 and get the answer. They will press or click (logically) 5, then 6, then 3, then the “+,” then 1, then 2, then 7, then the “=” buttons. How do we handle this in code? We have already set the callbacks for the number buttons to the funcNumButton function. There's two ways to handle this. We can keep the information entered as a string and then when we need to convert it into a number, or we can keep it as a number the entire time. We will use the latter method. To do this, we will keep the value that is already there (0 when we start) in a variable called “self.CurrentValue”, When a number comes in, we take the variable, multiply it by 10 and add the new value. So, when the user self.btnDash.grid(column = 0, row = 4) self.btnDot.grid(column = 2, row = 4) self.btnPlus.grid(column = 3,row = 1) self.btnMinus.grid(column = 3, row = 2) self.btnStar.grid(column = 3, row = 3) self.btnDiv.grid(column=3, row = 4) self.btnEqual.grid(column=0,row=5,columnspan = 4,sticky=NSEW) self.btnClear.grid(column=0,row=6,columnspan = 4, sticky = NSEW) def funcNumButton(self,val): if self.DecimalNext == True: self.DecimalCount += 1 self.CurrentValue = self.CurrentValue + (val * (10**-self.DecimalCount)) else: self.CurrentValue = (self.CurrentValue * 10) + val self.DisplayIt() enters 5, 6 and 3, we do the following... User clicks 5 – 0 * 10 + 5 (5) User clicks 6 – 5 * 10 + 6 (56) User clicks 3 – 56 * 10 + 3 (563) Of course we then display the “self.CurrentValue” variable in the label. Next, the user clicks the “+” key. We take the value in “self.CurrentValue” and place it into the variable “self.HolderValue,” and reset the “self.CurrentValue” to 0. We then full circle magazine #51 repeat the process for the clicks on 1, 2 and 7. When the user clicks the “=” key, we then add the values in “self.CurrentValue” and “self.HolderValue”, display them, then clear both variables to continue. Above is the code to start defining our callbacks. The “funcNumButton routine receives the value we passed from the button press. The only thing that is different from the example above is what if the user pressed the decimal button (“.”). Below, you'll see that we use a boolean variable to hold the fact they pressed the decimal button, and, 12 on the next click, we deal with it. That's what the “if self.DecimalNext == True:” line is all about. Let's walk through it. The user clicks 3, then 2, then the decimal, then 4, to create “32.4”. We handle the 3 and 2 clicks through the “funcNumButton” routine. We check to see if self.DecimalNext is True (which in this case it isn't until the user clicks the “.” button). If not, we simply multiply the held value (self.CurrentValue) by 10 and add the incoming value. When the user clicks the “.”, the callback “funcFuncButton” is called with the “Dec” value. All we do is set the boolean variable contents ^ PROGRAM IN PYTHON - PART 25 “self.DecimalNext” to True. When the user clicks the 4, we will test the “self.DecimalNext” value and, since it's true, we play some magic. First, we increment the self.DecimalCount variable. This tells us how many decimal places we are working with. We then take the incoming value, multiply it by (10**-self.DecimalCount). Using this magic operator, we get a simple “raised to the power of” function. For example 10**2 returns 100. 10**-2 returns 0.01. Eventually, using this routine will result in a rounding issue, but for our simple calculator, it will work for most reasonable decimal numbers. I'll leave it to you to work out a better function. Think of this as your homework for this month. The “funcClear” routine simply clears the two holding variables, then sets the display. def funcClear(self): self.CurrentValue = 0 self.HolderValue = 0 self.DisplayIt() Now the functions. We've already discussed what happens with the function 'Dec'. We set this def funcFuncButton(self,function): if function =='Dec': self.DecimalNext = True else: self.DecimalNext = False self.DecimalCount = 0 if function == 'ABS': self.CurrentValue *= -1 self.DisplayIt() The ABS function simply takes the current value and multiplies it by -1. elif function == 'Add': self.HolderValue = self.CurrentValue self.CurrentValue = 0 self.CurrentFunction = 'Add' The Add function copies “self.CurrentValue” into “self.HolderValue”, clears “self.CurrentValue”, and sets the “self.CurrentFunction” to “Add”. The Subtract, Multiply and Divide functions do the same thing with the proper keyword being set in “self.CurrentFunction”. elif function == 'Subtract': self.HolderValue = self.CurrentValue self.CurrentValue = 0 self.CurrentFunction = 'Subtract' elif function == 'Multiply': self.HolderValue = self.CurrentValue self.CurrentValue = 0 self.CurrentFunction = 'Multiply' elif function == 'Divide': self.HolderValue = self.CurrentValue self.CurrentValue = 0 self.CurrentFunction = 'Divide' The “Eq” function (Equals) is where the “magic” happens. It will be easy for you to understand the following code by now. elif function == 'Eq': if self.CurrentFunction == 'Add': self.CurrentValue += self.HolderValue elif self.CurrentFunction == 'Subtract': self.CurrentValue = self.HolderValue - self.CurrentValue elif self.CurrentFunction == 'Multiply': self.CurrentValue *= self.HolderValue elif self.CurrentFunction == 'Divide': self.CurrentValue = self.HolderValue / self.CurrentValue self.DisplayIt() self.CurrentValue = 0 self.HolderValue = 0 full circle magazine #51 13 contents ^ PROGRAM IN PYTHON - PART 25 one up first with the “if” statement. We go to the “else,” and if the function is anything else, we clear the “self.DecimalNext” and “self.DecimalCount” variables. The next set of steps are shown on the previous page (right hand box). The DisplayIt routine simply sets the value in the display label. Remember we told the label to “monitor” the variable “self.CurrentDisplay”. Whenever it changes, the label automatically changes the display to match. We use the “.set” method to change the value. and give it a test. As always, the code for this article can be found at PasteBin. Examples 1, 2 and 3 are at: and the Calc.py example is at: Next month, we will continue looking at Tkinter and its wealth of widgets. In a future article, we'll look at a GUI designer for tkinter called PAGE. In the meantime, have fun playing. I think you'll enjoy Tkinter. def DisplayIt(self): print('CurrentValue = {0} HolderValue = {1}'.format(self.CurrentValue ,self.HolderValue)) self.CurrentDisplay.set(self. CurrentValue) Finally we have our startup lines. if __name__ == '__main__': StartUp() Now you can run the program Greg Walters is owner of RainyDay Solutions, LLC, a consulting company in Colorado and has been programming since 1972. He enjoys cooking, hiking, music, and spending time with his family. His website is. full circle magazine #51 14 contents ^ HOW-TO I Libre Office - Part 6 Written by Elmer Perry n this month's article, we will discover a few new ways to format our documents using page styles, headers and footers. In past articles, I have discussed the use of paragraph and character styles. Page styles are similar, but deal with the overall geometry and formatting of the entire page. Headers and footers are the area at the top and bottom of the page, and are usually the same on every page of the same style. We will start by setting up our document and styles. Start a new writer document, File > New. In order to have access to the document's title, we will change some of the document's properties, File > Properties. On the description tab, put “This Is The Title” as the title of the document. We will use this later when we start creating our headers and footers. Click OK to save your changes. Now, we need to set up our page styles. We will use three page styles, First Page, Normal Page, and Landscape. First Page and Landscape already exist, but we will modify them. We will create our Normal Page style first. For our normal page style, we want a header area at the top with a light gray background. Open the Styles and Formatting dialog, Tools > Styles and Formatting, or click on the Style and formatting button (right). Click on the page styles button (right), right-click in the window, and select new. The Page Style dialog appears. On the Organizer tab, name the style “Normal Page.” Change the next style to Normal Page. This tells Writer that when we get to the end of the page, it will create a new page with the same style. On the Header tab, check Header On. This inserts a full circle magazine #51 header area on the page. Still on the Header tab, click the More button. A new dialog comes up. This dialog allows us to add borders and background colors to our header. On the Background tab, pick the light gray color for the background. Click OK on both dialogs, and we are finished with our Normal Page style. For our First Page, we will modify the one that already exists. We want a 3” (7.5cm) margin at the top (for first page graphics added at another time), and a light gray footer area at the bottom. Rightclick the First Page style in the Styles and Formatting dialog, and select modify. On the organizer tab, make our Normal Page the next style. The Page tab allows us to change the margins for the page. Make the top margin 3” (7.5cm). This time we will go to the footer tab, check Footer On, click 15 on the More button, and choose our light gray background. For our Landscape page style, we will modify the existing Landscape style. For our Landscape style, we will add both a header and footer. Right-click on the Landscape style and modify. Take a few moments to look at the page tab and notice the orientation for the page is landscape, which is exactly what we wanted. Turn on the header and footer on their respective tabs, and select the light gray background for both. contents ^ LIBRE OFFICE - PART 6 Now, we are ready to create our document. Double-click the First Page style, and the page in your document will change to the formatting we added. You will notice the light gray footer area at the bottom. Click inside the box to edit the footer. We will first add our title, Insert > Fields > Title. This inserts the title we added in the document properties. You can use this method to insert the title of the document anywhere you need it. If you change your title later in the document properties, you can update all instances of the inserted field with Tools > Update > Fields or by pressing F9 on your keyboard. Type “ Page “, remembering to put spaces on either side of the word Page, and insert the page number, Insert > Fields > Page Number. Move your cursor to the beginning of “Page” and press the tab key on your keyboard until the page number is flush against the right side of the footer area. Click out of the footer area into the main body of the page. Once this is done, you can begin to type in your text. Once you reach the end of the page and a new page is inserted, you will notice it is formatted with the Normal Page style with a header area at the top. Fill in the header information just like we did for the footer of the first page. Make sure you use the fields, especially on the page number. The page number field comes in handy when we get to the third page. You will then notice the header information has been copied for you and the page number updated to reflect the current page. Next, we will insert a Landscape page. Before you get to a new page, Insert > Manual Break. Select Page Break, and under the style, select Landscape. This will “ Writer makes it easy to add pages with different styles and orientation, as well as automatic headers and footers. take you to a new page with a Landscape layout. Because this is a different style from our Normal Page style, we will need to fill in our header and footer information. This is handy should you need different header or footer information on some pages, just insert a page with a different page style. Once you have completed your landscaped page, create another page break (Insert > Manual Break) with a style of Normal Page. You will notice your page numbering continues, including the inserted landscaped page(s). If you do not want the inserted landscape pages included in the page count, you can manually adjust the page number in the Manual Break dialog. Writer makes it easy to add pages with different styles and orientation, as well as automatic headers and footers. You can make the headers and footers as big as you want, and they can contain whatever information you want to put in them. Fields help keep certain information consistent in your document, and let you write without worrying about page numbers. In my next article, I will move away from Writer and show you how to make a poor man's database using a Calc spreadsheet. After that, we will use our spreadsheet to create a form letter. Elmer Perry is a children's minister in Asheville, North Carolina whose hobbies include web design, programming, and writing. His website is eeperry.wordpress.com full circle magazine #51 16 contents ^ HOW-TO Written by Daniel Holbach I f you followed the instructions to get set up with Ubuntu Development, you should be all set and ready to go. As you can see in the image shiwn right, there. Finding the problem Ubuntu Development Pt. 3 - Bug Fixing Debian already, lists small bugs (we call them ‘bitesize’), and so on. Check it out and find your first bug to work on. Figuring out what to fix Tomboy, a note taking desktop application. The Tomboy application can be started by running /usr/bin/tomboy on the command line. To find the binary package containing this application, use this command: apt-file find /usr/bin/tomboy full circle magazine #51 This would print out: tomboy: /usr/bin/tomboy: In this case, nothing is printed, meaning that tomboy is also the name of the binary package. An example where the source and binary package names differ is python-vigra. While that is the binary package name, the source package is actually libvigraimpex and can be found with this command (and its output): apt-cache show python-vigra | grep ^Source: Source: libvigraimpex apt-cache show tomboy | grep ^Source: 17 contents ^ HOWTO - UBUNTU DEVELOPMENT 3 - BUG FIXING Getting the code Once you know the source package to work on, you will want to get a copy of the code on your system, so that you can debug it.. Work on a fix: • Upstream (and Debian) bug tracker (open and closed bugs), • Upstream revision history (or newer release) might have fixed the problem, • bugs or package uploads of Debian or other distributions. If you find a patch to fix the problem, say, attached to a bug report, running this command in the source directory should apply the patch: patch -p1 < ../bugfix.patch Refer to the patch(1) manpage full circle magazine #51 for options and arguments such as - guess what your reasoning was -dry-run, -p<num>, etc. and what your assumptions were. Every Debian and Ubuntu package source includes debian/changelog, Testing the fix where changes of each uploaded package are tracked. To build a test package with your changes, run these commands: bzr bd -- . Documenting the fix It is very important to document your change sufficiently so developers who look at the code in the future won’t have to 18 The easiest way to update this is to run: dch -i This will add a boilerplate changelog entry for you and launch an editor where you can fill in the blanks. An example of this could be: specialpackage (1.23ubuntu4) natty;. contents ^ HOWTO - UBUNTU DEVELOPMENT 3 - BUG FIXING With that out of the way, let’s focus on the actual changelog entry itself: it is very important to document: • where the change was done • what was changed •. Committing the fix: Zero Downtime lp:~<yourlpid>/ubuntu/<releas e>/<package>/<branchname> This could, for example, be lp:~emmaadams/ubuntu/natty/sp ecialpackage/fix-for-123456 So, if you just run bzr push lp:~emmaadams/ubuntu/natty/sp ecialpackage/fix-for-123456 bzr lp-open you should be all set. The push command should push it to Launchpad, and the second command will open the Launchpad page of the remote branch in your browser. There, find the “(+) Propose for merging” link, and click it to get the change reviewed by somebody and included in Ubuntu. Next month: an overview of the Debian directory structure. Below Zero we help you to achieve Zero Downtime. w w w. z e r o d o w n t i m e .co. u k full circle magazine #51 19 contents ^ HOW-TO I Use KDE 4.6 2 - Effects Written by Ronnie Tucker t seems there are more KDE users out there than I thought. Quite a few people emailed me asking for a Part Two on Using KDE. So, here it is. I’ll show you how to spice up your KDE desktop by enabling the KWin effects (which are to be thought of as a KDE native Compiz Fusion, but built into KDE), and the toggle effects on/off, and by editing the configuration of some effects. With all effects off, KDE is a bit bland: Head into System > System Settings, and double click Desktop Effects: This is where the magic happens. Tick the box beside ‘Enable desktop effects,’ then click the ‘Apply’ button at the bottom right of the window (right): You’ll get a pop-up which asks you if everything looks OK. You have several seconds to reply by clicking Accept. Your desktop effects are now active! Should your display be unable to enable desktop effects, KDE will tell you this and not black out. It’s very nice that way. Your theme probably won’t show much in the way of effects, so usually I go back to the Desktop Theme window and assign/reassign a Desktop Theme. This makes sure your theme is using your snazzy new desktop effects such as blur or transparency. of all available effects. But the first thing I like to do is assign effects to the screen corners, which is in ‘Workspace Behaviors': In the ‘All Effects’ tab (still in Desktop Effects), you’ll see a list full circle magazine #51 20 contents ^ HOWTO - USE KDE 4.6 2 - EFFECTS I assign a desktop grid to my top left, and the cube to the top right, but you can assign them as you see fit. Going back into Desktop Effects (General tab), you can change how you switch windows. I, personally, prefer Flip Switch, but there are several to choose from. Below that you can change how you switch desktops. I like to slide. Below that you have the animation speeds. Going into the ‘All Effects’ tab again, it’s time to configure your effects. First off, the old classic Wobbly Windows. Clicking the spanner button on the right of each effect name will let you edit that effect’s settings. The Appearance items will let you change how a window is shown or closed. The animations range from Glide, in which the window smoothly fades in from small to large through to other animations which explode the windows into smithereens! You can also customise the key settings to enable or disable an unselected/frozen windows, or shading the background, thus highlighting the admin login and such like. some of the most commonly used and installed-by-default apps in Ubuntu with their KDE kousin. Are there any questions you have about KDE you’d like an article on? Drop me an email to ronnie@fullcirclemagazine.org and I’ll see if I can make your wish come true. effect. In this example, I assigned Ctrl+F12 to start/stop snowflakes falling on my desktop. This is done by clicking the current key binding, clicking Customise, and doing the key combination to assign it. Desktop Effects will also tell you if that key combo is being used elsewhere, and give you the option to assign that key combo to the currently selected effect, thus removing it from its previous effect. There’s a lot of things you can do with the effects that not only make your desktop look pretty, but also help you in your work with features such as dimming/blurring full circle magazine #51 Like last time, I recorded my desktop as I ran through this tutorial, so you can see the above effects being enabled/edited in my video on YouTube: m/watch?v=YSSExO9vT0 Before I leave you to play with your wobbly windows and cubes, I thought I’d give you a list of equivalent applications. It’s daunting to try to find your KDE equivalent of something, so (on the next page) I’ve listed 21 contents ^ HOWTO - USE KDE 4.6 - EQUIVALENTS Honorable Mentions: Ubuntu: Kubuntu: Purpose: Graphics: Evince gThumb Okular GwenView Document Viewer Image Viewer Internet: Evolution Firefox Pidgin Transmission Kmail Rekonq Kopete Ktorrent Office: LibreOffice LibreOffice Office Sound/Video: Brasero Rhythmbox Movie Player K3B Amarok Dragon Player Disc Burning Audio Video Utilities: Nautilus GEdit Screenshot Terminal Archive Manager Dolphin Kate KSnapShot Konsole Ark File Manager Text Editor Screen Grabber Command Entry File Compression full circle magazine #51 Marble is a Virtual Globe and World Atlas that you can use to learn more about Earth: You can pan and zoom around and you can look up places and roads. A mouse click on a place label will provide the respective Wikipedia article. Kdenlive is an intuitive and powerful multi-track video editor, including most recent video technologies. Kfilebox is a small application which allows quick and easy installation of the DropBox client without installing Gnome/Nautilus - Klipper is a clipboard app. The item you copied last will be the default one to be pasted, but others are stored in a buffer, so you can choose to paste your selections in a different order. It also converts URL's to barcodes. 22 contents ^ email it to: articles@fullcirclemagazine.org! full circle magazine #51 REVIEWS Games/Applications When reviewing games/applications please state clearly: • • • • • • •: • • • • • • • contents ^ LINUX LAB A Creating Your Own Repository Written by Frank Denissen ll software installed by default on a Debianbased system (like Ubuntu and Kubuntu) is organized in packages. The packages themselves are stored in a repository. The installation CD contains such a repository, but, in most cases, one accesses a repository via a server, the socalled mirror. Such a mirror gives access to a copy of the original repository created by the owner of the distribution. Any new version of a package is added to the distribution repository and afterwards copied to all mirrors. One system (for example your PC) can obtain packages from one to many repositories. The list of repositories used by a system can be found in the files /etc/apt/sources.list and /etc/apt/sources.list.d/*.list, and can also be found under Settings in GUI packet management tools like Synaptic (Repositories) and kPackageKit (Origin of Packages). The contents of all repositories is reread when we execute the command “apt-get update,” or when we press the Reload button in Synaptic. This allows the tools to verify which packets have new versions and offer them for upgrade. versions are nicely installed on their laptops. That is easy for me as I'm sure that any new package version will find its way to each PC without any further intervention from my side. It is also possible to create your own repository for private use. Procedure Well, I have a number of packages that are not available from the standard repositories. I downloaded packages from vendor sites containing drivers for my all-in-one scanner and my graphical card, I have some packages that are required by these driver packages and that are no longer supported by the newer Ubuntu versions and, finally, I created some packages myself. The creation of a repository takes five steps: • install the packages with the necessary tools • create a digital signature • create the repository directory and the related configuration files • add packages to the repository and build the repository. Repeat this step each time you have added a new package or package version. • make your repository known to the package tools on your system. Repeat this step for each system you manage. I put any new version of such a package in my private repository. When my children come home on the weekend from university, and they push on the Reload button in Synaptic, the new package In case you have multiple systems, you must decide if you want to distribute your repository to the other systems via a webserver (http) or using a directory shared via NFS or Samba. Why would you create a private repository? full circle magazine #51 24 Step 1 Install the packages apt-utils, gzip, make and gnupg. You need additionally a web-server like apache2 in case you want to make your repository accessible via the web. Step 2 If you don't have a digital signature yet, make one now with the command: gpg --gen-key This tool will ask a lot of questions. The most important ones are your name, your e-mail address, and a pass-phrase. A reasonable default is proposed for the other, more difficult questions. Step 3 Make now a directory to store the packages. This directory must finally be accessible by all your systems. /var/www/repository is a good contents ^ LINUX LAB - CREATING YOUR OWN REPOSITORY choice in case you decided to use apache as a web-server. /mnt/repository could be used if you decided to go for NFS or Samba. You will need the following configuration files in this directory: a public key, the steer-file for aptftparchive, and a makefile. You can create the public key with: gpg --export -a > repository.gpg The steer-file aptftparchive.conf can be created with a text editor (e.g. kate), and should have the following contents (above right, but replace “John Doe” with your own name). Create also a makefile (below right), Makefile. Notice that all lines, except the first one, must start with a tab (not spaces !). Step 4 Put your binary and/or source packages in this directory. I will use as example a packet I created myself. It binds commands to the multimedia keys on my Cherry keyboard. The binary package is called cherrykeyboard_1.1_all.deb. The related source packages are cherrykeyboard_1.1.dsc, cherrykeyboard_1.1_i386.changes and cherry-keyboard_1.1.tar.gz. APT { FTPArchive { Release { Origin "John Doe"; Label "John Doe"; Suite custom; Codename private; Architecture any; Description "Private packages by John Doe"; } } } You may place the packages in subdirectories if you like: the aptftparchive tool will scan all subdirectories. When you are ready, go to your repository directory and type “make” on the command line: your repository will be built. You will be prompted for the pass-phrase linked to your digital signature at the end of the execution. Execute “make” again, each time you add a new package or all: package version. This will update the repository. Step 5 The last step is to make your repository known to the package tools on your PC. Make first your public key known to apt, so that it can verify the signature of the repository files: sudo cp repository.gpg /usr/share/keyrings sudo apt-key add /usr/share/keyrings/repositor y.gpg apt-ftparchive packages . > Packages gzip -9 < Packages > Packages.gz apt-ftparchive sources . > Sources gzip -9 < Sources > Sources.gz apt-ftparchive contents . > Contents gzip -9 < Contents > Contents.gz rm Release.gpg || true apt-ftparchive --config-file=apt-ftparchive.conf release . > Release gpg -b -o Release.gpg Release full circle magazine #51 25 contents ^ LINUX LAB - CREATING YOUR OWN REPOSITORY Finally, configure the location of your repository by creating a file /etc/apt/sources.list.d/repository.li st, using for example sudo kate. apt-cache show cherrykeyboard You should now get something like: A PLEA ON BEHALF OF THE PODCAST PARTY The contents depends on the distribution method you have chosen: when you have exported your directory via NFS or Samba as /mnt/repository: cherry-keyboard - Enables multimedia keys on Cherry keyboard As you heard in episode #15 of the podcast, we're calling for opinion topics for that section of the show. deb ./ Instead of us having a rant about whatever strikes us, why not prompt us with a topic and watch for the mushroom clouds over the horizon! It's highly unlikely that the three of us will agree. file:/mnt/repository/. Or, an even more radical thought, send us an opinion by way of a contribution! deb-src file:/mnt/repository/. ./. when you made your repository available via web-server 192.168.0.5: deb y/. ./ deb-src y/. ./ It would be great to have contributors come on the show and express an opinion in person. References: “The Debian System - Concepts We are ready now. Verify that and Techniquesâ€? by Martin F. everything is working fine by executing the following commands: Krafft, 2005, Open Source Press GmbH, Germany, ISBN 3-937514sudo apt-get update 07-4 full circle magazine #51 Robin 26 contents ^ MY STORY A Written by Adel llow me to introduce myself. My name is Adel, and I am a Kazakh from China. (Yeah, a lot of us live in China, since China and Kazakhstan are neighboring countries. In fact, my home is just near the intersection of the four countries: Russia, China, Mongolia, and Kazakhstan.) I met Ubuntu much later than most Full Circle readers - at version 10.04. When I rolled into college in Beijing in 2009, I didn't know much about computers, and I didn't own a computer, since it was too expensive for me. So, I often went to the netbar to enjoy the computers. The cost is 2 yuan per hour, about $0.31 US. I like collecting different software and other things while others are enjoying computer games. In fact I seldom play games. Then I accidentally came across Ubuntu, version 10.04. At first, I just thought it was a program, but soon I found that it was so popular in the software websites. Yes, it is an operating system. In China, nearly 100% of computers run Windows, including those in the netbars and schools. When I use a computer in the netbar, it always crashes, which often drives me mad - you never know why they do this. I wanted to buy a laptop for myself because I wanted to try Ubuntu so much, and with one I need not go to netbars, but this was impossible. Then I found a new software called VirtualBox, which I think many of you are familiar with. With it, I could run Ubuntu in the computer in the netbar! This is exciting! Almost every weekend I went to the bar to virtualize my Ubuntu. How I wanted a computer for myself! The things above happened in 2010, and, in December of that year, I finally bought my first computer. It has an Atom D510 CPU and 2GB DDR and a 160GB hard disk. In fact it is a netbook. The second day, I went to the netbar and downloaded the latest Ubuntu 10.10 and installed it on full circle magazine #51 my computer. Yes, I installed it by clearing the whole disk which ran Windows 7 when I bought it, which is so slow when I start it. Now I have been using Ubuntu more than 3 months. Everything is going so perfectly! I work with OpenOffice.org (can you help me install LibreOffice? thank you!), watch videos with VLC, and chat with Skype since it allows me to do video chatting. Firefox works very well too, and I also installed Chrome. Almost forget one thing I also installed Macbuntu. I think you must have heard of it? It is so wonderful not only because it looks beautiful, but also you can use “exposeâ€? by moving the mouse to the right corner or left corner when you are dealing with many apps. (I had once used my friend's Mac OS, but the expose in it is much less convenient than Macbuntu, for you must use the keyboard and four fingers to use it, not just the mouse.) Yeah, that's all my wonderful experience with Ubuntu. Nearly all my friends envy my desktop when 27 they see my computer. (They just think it is Mac OS, hahahaha.) I love it so, so, so much!!! I will never leave it and never use that heavy and slow Windows! Now my life has really, really changed. With color everyday, Ubuntu gives me so much pleasure and excitement, although I know little about the Linux world. No, Ubuntu shouldn't be treated as Linux, it is Ubuntu, the way it is, which always gives you new convenience, new choices, and new adventure. Thanks for reading my letter. My English level is just OK since it is not my mother language. (I started learning English in 2003.) I will be very happy if you give me a short reply that says you have spared your time to read my article, no matter if you print it or not. Maybe, I should write this in Chinese and send it directly to the Chinese part, but, after thinking about it, I determined to send it to you since I want you and foreign friends to know my confident and happy experience with Ubuntu. contents ^ MY OPINION H Written by Allan J. Smithie ave you ever sat there, thinking "I wish I could write?" Not "I wish I could write like Joe Schmo or anyone in particular," just "I wish I could write." I think you can. I don't mean the one novel that's in all of us. Believe me, I've read a lot of those. It's not true. But you can write something. An opinion? Opinions are good. We all have them. Done something technical? What about a how-to, or a review? Maybe a poem? A haiku in response to Eric Schmidt? Go ahead, learn a new art-form. Write Something. Write Anything! • Concision • Transparency • Consistency I can give you list upon list of rules for writing most kinds of text. You know the kind of thing I mean: • Put the reader first. • Be clear. • Be specific. • Get to the point. Then stop. • Express one thought at a time. • Use short phrases. • Use short sentences. • Use short paragraphs. • Never use a long word when a short one will do. • Edit thoroughly; cut, cut, cut I know it's difficult. Web readers have a short attention span, and you just can't show off the extent of your vocabulary, given our readership is transnational. That doesn't make it impossible. I mark my own papers these days: 'must try harder'. Why don't you give it a go? Any subject will do. Go ahead, write something. Write anything. I write a lot of words, and I confess I have followed the rules and I have failed. Also, I have thrown away the rules and I have failed. That's why we have editors. I can give you lists of attributes of good writing: • Clarity • Accuracy • Relevance • Sincerity (Remember, if you can fake this, you can fake anything.) full circle magazine #51 28 contents ^ I THINK... Last month's question was: What distro(s) do you use? ... and with those distro(s), which desktop environment(s) do you use? What distro(s) do you use? full circle magazine #51 29 contents ^ I THINK... “ Ronnie says: Even though I purposely didn’t mention Unity, I still ended up with a ton of anti-Unity comments. So, Ignoring the negative comments towards Unity, you thought: “ LXDE is one of the best desktop environments out there. I even use it on my gaming rig with 8 GB of RAM and a hexacore processor. KDE is my choice when I want eye-candy and productivity. “ “ Bodhi for the win! Kubuntu on my desktop & laptop for work on an everyday basis. Gnome for some short time basis usage, and Lubuntu/CrunchBang for low resources machines. “ “ KDE made huge steps but it seems too bloated. I decide what I use, and I decide what apps I need. I need only a tiling wm and terminals + firefox + geany. Nothing more and nothing less. “ Mint KDE gives me the joy of Linux, the beauty and ease of use of KDE, plus a true readyto-use out-of-the-box experience. The fact that they don't release updates before they're READY? Priceless. Ubuntu running Gnome. I also have a couple headless servers but I didn't count those. “ Ubuntu 11.04/Unity for my main home PC. Ubuntu 10.04, no desktop, for media center. Headless Ubuntu 11.04 server for all home server needs. Lubuntu 11.04 for my old netboox Asus eeepc 900. Linux Mint works 'Out of the Box'. Nothing to add or download; just turn it on and use it. “ “ Xfde and Enlightenment FTW! “ Waiting for KDE4- LTS distro supporting my Wacom Bamboo CTH-460 pen tablet. “ Ubuntu rescued my computer that wouldn't run Windows XP any more. Love it! “ “ Ubuntu 11.04 Unity 2D in a laptop, and Ubuntu 8.04 in a Dell MIni 9, and Vista in a PC. “ “ Ubuntu and Gnome to show off, Mint and LXDE for work, and Puppy for troubleshooting. LXDE rocks! Very low RAM use. “ “ OpenSUSE & KDE, Rules!!! Most of the time I use Ubuntu, but in slow machines I use Xubuntu or Lubuntu. “ I really hope KDE will be the next hit. It just keeps getting better and better. “ Lubuntu on my old laptop works great. I prefer it over Xubuntu. At office, Ubuntu 10.04 LTS. “ I will use the Classic Desktop of Ubuntu 11.04 until summer 2012. Till that time the GNOME-Shell and Unit should be cured of childhood diseases. Then I will decide to go forward with the GNOME-Shell, Unity ... or even KDE? “ I am using Unity right now on my Ubuntu machine. But I am thinking about give another chance to Gnome 3 Shell... However, as both of them are sometimes acting funny on Ubuntu, I have also a Fluxbox session, just in case... My other (older) computer is happily running Xubuntu. The question I'd like to pose for FCM#52 is: Would you like to see a series of articles on audio editing with Audacity? To give your answer, go to: I have multiple machines, 1 Fedora running KDE, and 2 full circle magazine #51 30 contents ^ full circle magazine #51 31 contents ^ REVIEW GRAMPS Written by Dave Rowell If you're at all interested in family history or genealogy, you'll need to use some computer program to keep track of the information you'll accumulate. Every time you try to change or update something, you'll find that more than one 'document' will be involved – you'll have to find and update them all! A program will facilitate updating and make sure the update is propagated wherever it’s needed. If you're running Linux as your OS, there's not a lot of choice – install Gramps! Fortunately, Gramps is mature, stable, easy to use, and very capable. Gramps is actively maintained by a dedicated and very responsive group of developers. Let’s see how it stacks up to Family Chronicle's old list of required features: • Data Integrity – Gramps doesn't change or add to the information you supply, as some other programs do. In another meaning of integrity, Gramps uses good database technology to keep your data safe. • Name and Date Recording – Gramps has more than adequate provision for entering names and dates, thanks to an international group of developers and users. Dates are entered in your choice of format and calendar. But remember, the usual date format in genealogical circles is day, month, year. • Place Recording - Gramps has an interesting set of options for entering places. All the necessary fields such as Street, City, County, State, and Country are provided. The database may be sorted on any of them. Additionally, there is provision for entering geographical coordinates. If that is done, Gramps can display the data on a map. • Source Documentation – Ya gotta do it! Gramps provides the tools to document the source for each datum you enter. And for each source, the repository where you found it. I can't stress enough how important it is to source each fact that you enter – as you enter it. In addition, there is some provision for evaluating that source. For my taste, there are so many places to enter sourcing full circle magazine #51 information as to make it confusing - my main complaint. • To-Do Lists – On several screens, Gramps provides for a key indicating whether further work is needed. There is a note type for a more detailed description of work to be done. You'll note that one of the main tabs is “Gramplets”. Open it, and you'll gain access to a bunch of user-developed research aids. 32 TODO was installed by default. Use it – it beats a bunch of sticky notes! • Event Recording – Gramps, as I see it, is an Event-driven program to match our event-driven lives. Enter an event, such as birth, in a person's life, and there is more than adequate provision to document and view it. • Parent Recording – A child may contents ^ REVIEW: GRAMPS be linked to multiple families such as birth family and adopted family. It can handle a case where one parent is a birth parent while the other isn't. Gramps also provides for several types of parent relationship - from married to none. • Multimedia may be linked to a person, event, source - you name it. Do think about organization of your media and whether it will stand the test of time! There is something to be said for a single directory containing items, perhaps severely cropped, to be linked. • Data Sorting and Reporting – Not a problem for Gramps. If sorting isn't enough for you, Gramps allows you to filter the data while you're sorting. There is provision for text reports, graphical reports, and website generation. You can generate the usual ancestor charts. I've made 18” x 24” reports for a family reunion. • Back-up and Data Transfer – Gramps has you covered here too. You can import and export GEDCOM and GeneWeb format files to transfer to and from other programs. This works as well as can be expected, but these file formats don't accommodate the bells and whistles found in other programs. Gramps provides for back-up both with and without multimedia. It works too! Interestingly, you can generate vCalendar and vCard files from your data, a feature I haven't explored. So, as you can see, you're not crippled in any way by using Gramps. We're fortunate to have access to such a fine program. Gramps is in the repositories and can be installed using Synaptic or, easier yet, from the Software Manager. In Linux Mint, use Menu > Software Manager (Ubuntu calls it Software Center) - type 'gramps' into the search window – click on 'Gramps' then 'Install.' Its that easy! Next month I'll show how to get started with Gramps by starting a new database, entering your own information, and how to show your sources. full circle magazine #51 33 contents ^ LETTERS Kindle and Google Earth S everal months ago, I decided that I would like to have some eBooks from Amazon, so I downloaded the Kindle for Windows software, but it wouldn't install with the default version of Wine. After some searching, I found the way to get it to work on my Ubuntu 10.10 was to download the Wine 3 beta version. So far, I have had no problems with this version. sudo add-apt-repository ppa:ubuntu-wine/ppa && sudo apt-get update && sudo aptget install wine1.3 I already had an Amazon account, so, left me with some Every month we like to publish some of the emails we receive. If you would like to submit a letter for publication, compliment or complaint, please email it to: letters@fullcirclemagazine.org. PLEASE NOTE: some letters may be edited for space. oversized text boxes on the screen. This was fixed by downloading the Microsoft True Type fonts: sudo apt-get install msttcorefonts Ronnie replies: Having emailed back and forth with Chris about this, it seems that you have to explicitly choose KDE from the dropdown menu at login. Otherwise you'll be greeted with a blank screen. It seems that Goggle Earth needs them for its display. Brian Cockley KDE Login I 'm going to switch from Unity to KDE after trying out Unity for a while. Just dont like it. But, on another note, I wonder if you have any advice on a KDE question. I can use a Kubuntu live CD (11.04) on my desktop computer and it runs perfectly. Everything boots and I'm able to use the system. But, if I install it to the hard drive, I never get past the screen that shows those 5 icons as the desktop loads up. My system will lock up and need to be restarted. Chris full circle magazine #51 Pint and a Pizza I have to admit, I am getting a little tired of the complaints about Unity, and really don't understand why they don't click a few buttons and run their Ubuntu under Gnome - it's still there you know. I am prepared to give Unity a chance, and already I am using it without thinking. In addition to that, a lot of the kinks will be ironed out by 11.10. If I didn't like Unity, I would be using Gnome and wouldn't be complaining. If I didn't like Ubuntu, I would move quietly to Kubuntu or even another distro. And don't worry about Canonical, the owner (Mark Shuttleworth) wouldn't even 34 Join us on: facebook.com/fullcircle magazine twitter.com/#!/fullcirclemag linkedin.com/company/fullcircle-magazine ubuntuforums.org/forum display.php?f=270 notice the money missing if it folded over night. He sold his four year old company for millions when he was twenty-five and I am sure investments have doubled that by now. So, come on people, less of the whining, and either knuckle down, or move on. Start worring about the important things in life such as Greece, our national debt, and how this will affect the price of a pint and a pizza over the coming year. Ampers contents ^ LETTERS Adding KDE I f/when you get around to writing a Part 2 article about KDE, I implore you, please make a mention on how you can switch from Ubuntu to Kubuntu without losing all your programs - that would make me a very happy bunny indeed! The PPA has the .debs for Lucid, Maverick and Natty. After installing you only need to configure the plugin. So, it's not hard to install, and the documentation is not out-ofdate. Antonio Chiurazzi John Haywood Ronnie says: the easiest way is to install the kubuntu-desktop package. You'll then be able to choose KDE, or Gnome, at login. The only down side of this is that your application menu in both KDE and Gnome will contain both Gnome and KDE applications. No big deal, it just makes your menu a bit full looking. He's Right You Know More PAM I'm a reasonably advanced user not interested in the latest tool or trying things out for the fun of it, but just wanting a stable and consistent platform to use. I try to interest my friends in Ubuntu, especially those who don't want to shell our for the latest Windows, or MS Office, or whatever. T he official PAM website has the proper installation and configure procedure. The right way to install PAM in Ubuntu is described here: I n my opinion, he [FCM#50 My Opinion] is right. I draw parallels between Microsoft and Canonical. Microsoft changes their OS regularly to get more money from people. I assume Canonical has some reason to change things, other than just for the sake of change. full circle magazine #51 But more important than money is time, and most of them would rather pay for stability than to exist on the bleeding-edge where they constantly have to call me for help and advice. And frankly my time is also important I'm not a paid support engineer for Canonical. I have a hard time recommending Ubuntu, because I know Canonical is going to fiddle just for the fun of it, and I'll be stuck with friends wanting their machine repaired. choice of window manager, screen layout, favorite browser, commonly used tools, and so on. My question to Canonical is, 'Is Ubuntu for the hacker, the designer, the advanced user, the "elite", the uber-geek, or is it for the masses?' If the former, then Microsoft's eternal dominance is assured. If the later, quit changing things for the sake of changing things! Thomas I want a single consistent experience from release to release. I don't expect my buttons to wander around, change color, go away or anything else that hinders me from using the computer. I can deal with change, but rather that I don't want to HAVE to reset choices I have already made because some designer at Canonical thinks he knows better than I do. That's Microsoft's way of treating users. If Canonical wants to continue to change the standards, then there should be a single configuration file of choices the user has made which will be read and obeyed by the upgrade process, containing the USER's 35 Under The Weather L ooking for some sunshine I logged onto the Met Office website and saw their desktop widget available for download. It needs Adobe Air 2.5, but the good news is that they cater for Linux users. But, this latest widget needs 1GB of RAM to run! I just upgraded my RAM to 2GB, and to think that half of it would disappear instantaneously sends shudders through my spine. I'll just keep my bookmark to the forecast page. Roy Read contents ^ UBUNTU WOMEN Cheri Francis Inter view Written by Elizabeth Krumbach. EK: What were your biggest takeaways from the summit? full circle magazine #51 36 contents ^ UBUNTU GAMES T Shadowgrounds Written by Ed Hewitt his month, I am continuing my series of reviews on the games included in The Humble Frozenbyte Bundle. Due to the number of games I have queued up to review in upcoming issues of Full Circle, I am bundling two games into this review. Shadowgrounds, and Shadowgrounds Survivor, are both Sci-Fi top-down shooters set on the planet of Ganymede. Both titles focus around an alien invasion of the planet and an attack on the human base. In Shadowgrounds, you play as a engineer, Wesley Tyler, who thought the game has to repair the base, fight off the alien, and escape the planet. In Shadowgrounds Survivor, the story focuses around three playable characters as they try to escape the planet. The story is solid on both titles, which is told through cut scenes and in-game video messages. Shadowgrounds features story items such as Email messages, Documents, and diary entries. All these can be picked up throughout the game, and help to expand the story further. Shadowgrounds is a shooter, but it is unique by not using the traditional first person view. You play the whole game from a top down prospective, which works perfectly well. The keyboard is used to move the character, while the mouse is used to point your gun. It does not take long to get used to the new view, it may even encourage non-fps fans to try this shooter out. element. You will have to be careful how you position your torch, since you could have aliens sneaking up behind you. This really does add to the tension and atmosphere. Shadowgrounds features a varied arsenal of weapons and explosives, which are very satisfying to use. Both the graphics and sound are good in both titles, which really helps to improve the gameplay. While the graphics are dated in Shadowgrounds, with some solid improvements in Shadowgrounds Survivor, the graphics are still good by today’s standards. Its standout graphic effect is the lighting from the torch. A solid soundtrack can be heard throughout, with sound effects which add to the tension and atmosphere of Shadowgrounds. Both titles feature a decent length story campaign, Shadowgrounds boasts co- The missions are focused on completing a series of objectives, and you will be moving around dark buildings, shooting aliens, activating objects and recovering items. Missions are enjoyable, but do become repetitive as most of the objectives are similar. What makes this game enjoyable is shooting aliens and the excellent atmosphere. Most of the game is played in the dark, with the odd flickering light. The torch attached to your gun can display light only in front of you, which adds an excellent tactical and gameplay full circle magazine #51 37 contents ^ UBUNTU GAMES operative play while Shadowgrounds Survivor has a Survival Mode.. Based on the style of game in Shadowgrounds, it is really begging for a Co-Op mode. While Shadowgrounds features this mode, and allows up to 4 players to play though all the missions from the story campaign, the mode has not been thought through at all. Co-Op can be played on only one computer, and not over LAN or Internet. For most people, this makes Co-Op completely pointless. Shadowgrounds Survivor has a far better extra mode, in the form of Survival Mode, which sends hordes of aliens your way, and you have to survive for as long as possible. Shadowgrounds and Shadowgrounds Survivor are both solid shooters, which are slightly different to traditional action titles, thanks to its horror atmosphere and viewpoint. The story is very plain, but plenty of cut scenes and objects can be found throughout the game which help to add some background to the main story. Missions are enjoyable, though repetitive, and bring nothing new to this shooter genre. The graphics and sound are solid throughout, and improve the atmosphere of the game. Co-Op play is really lacking, and should have been a crucial feature of this title. The port of this title to Linux is poor, and requires a good system to run this game well. Overall, it is an enjoyable shooter, but falls short in many areas. Score: 7/10 Good: • Interesting Story, with story items to pick up during game • Solid Gameplay • Good Atmosphere • Enjoyable Survival Mode (Shadowgrounds Survivor) Trailer: MhRedeAOWxE Bad: • Co-Op not fully implemented (Shadowgrounds) • Becomes dull & repetitive over time • Poor performance on Linux Ed Hewitt, aka chewit (when playing games), is a keen PC gamer and sometimes enjoys console gaming. He is also co-host of the Full Circle Podcast! full circle magazine #51 38 contents ^ Q&A If you have Ubuntu-related questions, email them to: questions@fullcirclemagazine.org, and Gord will answer them in a future issue. Please include as much information as you can about your problem. Compiled by Gord Campbell Q I have always edited the grub list, but, in 10.04, if I do, and remove say, four of the repeated entries, they are put back by the system on reboot. A The best way to deal with it is to actually remove the old kernels. At boot up, note the final five characters of the oldest kernel, such as 32-31. Run Synaptic Package Manager, and search for that string. You should get half a screen of packages, with two "linux-headers" items and one "linux-image" being installed. Right-click on each of them, and select "mark for complete removal." Then click on "Apply." After that, open Accessories/Terminal and enter this command: sudo update-grub The grub list should be shorter by two items. Q A and access the files. Have a look at If you want to encrypt just a DebuggingSoundProble single file, it is probably easier to ms. Odds are, your use the Nautilus file manager. sound is muted somewhere. Highlight the file, right-click, and Follow the instructions select "Compress." A window will on this page: pop up. Enter a file name, and I have installed and run select "7z" as the file type. Click on Audio/InstallingLinuxAls Truecrypt, but it doesn't "Other options," and you can aDriverModules or upgrade to seem to encrypt files. specify a password, and select Ubuntu 11.04. "Encrypt the file list too." Click "Create," and you are done. When you run I recently upgraded to Truecrypt, it can create Ubuntu 11.04, and the a file which is a Let's say I've got one new scroll bars drive me Truecrypt "volume," and .avi video with a film with crazy. How can I get the unmount or mount Truecrypt its original audio fat scroll bars back? volumes. Then you can paste files language (e.g. English) in and out of the volume. When and another .avi with the same Run Synaptic Package you mount a volume, you have to film but with a second audio Manager, Search for provide the password you used to language (e.g. re-dubbed in liboverlay and remove create it, and then the contents Italian). Is it possible to get only it. After a reboot, you are displayed as openly as if they one .avi video with the film and should have fat scroll bars. were in a regular folder. However, the two audio languages you can upload a volume to an selectable? online storage site, and be I have just installed confident that the contents are This command will do it: ubuntu 11.04 on my safe from casual browsing. (If the laptop. Everything is American NSA wants to see your great, but I don't get any files, all bets are off.) Then a buddy sound. can download the volume, provide the password you have given him, I can't seem to get my sound running on my Compaq Presario CQ56 with Ubuntu 10.04. A Q A Q A Q Q A full circle magazine #51 39 contents ^ Q&A ffmpeg -i input -vcodec copy -acodec copy output.mkv newaudio -i input2 -acodec copy Q I installed Ubuntu 11.04 on my netbook, leaving some unused space on the hard drive. I got it all set up the way I liked it, then I installed Android-x86 2.2 in the unused space. Now the boot menu includes only Android. A (Thanks to Garvinrick4 in the Ubuntu Forums.) Boot from a LiveUSB flash drive, or a LiveCD if your computer has a CD drive. Open Accessories/Terminal and enter these commands: sudo fdisk -l (Enter your password when prompted. It should show you the storage devices, and your hard drive is probably /dev/sda. If it's not, modify the next two commands.) sudo mount /dev/sda1 /mnt sudo grub-install --rootdirectory=/mnt /dev/sda sudo umount /mnt sudo reboot Q I installed Google Chromium browser, didn't like it, so I removed it. Now when I click on a link in Evolution 2.30.3, I get a dialogue box telling me: Could not open link. Failed to execute child process "/usr/bin/chromium-browser" (no such file or directory) A Start Firefox. Click on Edit/Preferences. Select the Advanced tab. Near the bottom, is "Always check to see if Firefox is the default browser on startup". Click the "Check Now" button. Select "yes." Q I am an accountant. My children need to get into my computer to do homework, but I want to block spreadsheet programs to protect my work content. A You can't block them from running programs, but you can block them from being able to full circle magazine #51 access your files. Set up a nonadmin userid for the kids, then in a terminal run: chmod 750 /home/yourusername Make sure you have a strong password they don't know! Q A I found it. It's in the nVidia settings of all places. I want to know how to preview sites in progress locally while using Kompozer. A Install LAMP (in addition to Linux: Apache, Mysql and PHP), a full web server. The location of the site is /var/www. Copy your php and html files from 40 Tips and Techniques How hot, version 2.0 My brother was messing with something to do with the look and feel of Unity, and now I have a crazy drop shadow on my mouse cursor. I hate where the drop shadow is placed and want it back to its default position. Q Kompozer (javascript+images+other web content) there. In the browser you can access it via 'localhost' or 127.0.0.1. I n Issue 43, I revealed one of my hang-ups: I want to know how hot things are, dammit! Then Unity arrived (for me, as a testing environment, not my production system), and applets were gone, apparently. Conky to the rescue! Easier said than done. If someone would like to propose a "Top 5 Conky Tutorials," I think it would be a great addition to Full Circle Magazine. It was easy to find instructions on how to change the border and colour, but I honestly don't give a rat's patootie about those things. Eventually, Google led me to some useful information, but it wasn't easy. I also grabbed the official Conky manual and pasted it into a text file for offline perusal. I installed lm-sensors before I contents ^ Q&A got into this. I have included my .conkyrc file. Everything up to the word "TEXT" was simply cribbed from some web site, and seemed to be OK. Displaying the uptime and the kernel version were from the same source, and they struck me as OK. Then we get into the meat of the matter. "Hwmon temp 1" turned out to be the chipset temperature, mostly based on trial and error. I had hoped there would be other "temp" variables, but it was not to be. I installed a proprietary driver, and apparently that was all I needed for "nvidia temp" to work. For the CPU temperature, a whole lot of piping was needed. "Sensors" is part of the lm-sensors package. "Cut" extracts just the desired information, and "sed" formats it. their web sites, but we're not actually going to do that.) Again, I piped the output from conkyForecast into Cut, because the raw output included an ugly "A" with an accent. The other items in my conky config are pretty boring: CPU frequency and utilization, memory and swap usage, and disk space usage. The full text for the Conky discussed here (and shown below) can be seen at: My temperature fetish is not limited to my computer, I also want to know the temperature outside. If you Google conkyforecast, you will find a place to download what turns out to be "Hddtemp" is the temperature a repository name, which is added reported by the hard drive. To get to Synaptic or Software Center. this, you must install hddtemp, Then you can install the actual and run it as a daemon: conkyforecast program. Also sign hddtemp -d /dev/sda (or the up as a partner at weather.com, if name of your hard drive) you are in North America. (Sorry, I have no idea what to do if you are I have an Nvidia video card, and outside North America. Letters please!) You will get a partner id and licence key. .conkyForecast.config Create a .conkyForecast.config file CACHE_FOLDERPATH = /tmp/ in your root folder, CONNECTION_TIMEOUT = 5 following the pattern I EXPIRY_MINUTES = 30 TIME_FORMAT = %H:%M have included, but with DATE_FORMAT = %Y-%m-%d your own partner id, XOAP_PARTNER_ID = XXXXXXXXX licence key, and default XOAP_LICENCE_KEY = YYYYYYYYYYY location. (Partners can DEFAULT_LOCATION = CAXX0504 display the weather on full circle magazine #51 41 contents ^ MY DESKTOP Your chance to show the world your desktop or PC. Email your screenshots and photos to: misc@fullcirclemagazine.org and include a brief paragraph about your desktop, your PC's specs and any other interesting tidbits about your setup. I took this screenshot before upgrading to 11.04 because I don't know what will happen with my existing desktop configuration since 11.04 will use Unity by default. I took this picture some time ago. It's not really my idea of a perfect desktop, but I had fun doing it. Software used: emerald theme manager, compiz, AWN, screenlets (with quick folder applets intalled), and, not to be forgotten, Cairo Dock, all running on Ubuntu 10.10, with GNOME 2.32. The dock at the bottom of the screen is AWN, and I have various screenlets on the right side of the screen, and 2 sticker screenlets toward the left side of the screen - to try and make them look as though they are hanging on the wall. Cairo Dock is heavy, and you need to have a good computer running it or you will end up having a turtle system. My computer is a Toshiba Satellite L655-S5157, with an Intel i3 processor, 4 GB of RAM (even though I'm running 32-bit Ubuntu to help ensure that device drivers work with my hardware - I still had to do some tweaking to get the wireless adapter to work and the headphones jack to work), and I upgraded to a 750 GB hard drive. Computer hardware specification: Compaq presario CQ42 203AU, RAM 3GB, graphic card ATI radeon 1GB, hard disk 320GB. Ihsan Jaffar Scott M. Keeth full circle magazine #51 42 contents ^ MY DESKTOP Apropos of "The Netbook is Not Dead Yet" by Allan J. Smithie in issue #49 of Full Circle, my desktop is running on an Asus EEE-PC 900E, and it runs wonderful on it. Listening to music using either Audacious (right hand corner) or Banshee Media Player, the default player, is just perfect. The wallpaper is from: This is my Ubuntu 10.10 desktop. I like this setup because of its simplicity and yet classic and sophisticated feel. I downloaded the wallpaper from:, used the theme Dust, adjusted the screen resolution to 1280 x 1024, and set the panels to transparent (right-click > properties...). Below that is the Cairo dock. My PC specs are: I'm always on deviantart looking for Ubuntu wallpapers wonderful site for it. The skin is elementary, and the icon is Faenza-Dark set. • Dual-core 2.5 GHz • 2GB RAM • Integrated Graphics 256 MB Ramon Barros Eyob Fitwi full circle magazine #51 43 contents ^ TOP 5 VOIP Clients Written by Andrew Min Ekiga QuteCom QuteCom (formerly known as WengoPhone and run by the French VoIP service Wengo) is another highly popular SIP client. Like Ekiga, it supports voice and video chatting. Where it excels is its support for third party protocols. Its developers have implemented support for the libpurple library, the library powering the popular cross-platform program Pidgin. As a result, QuteCom users can chat with MSN, AIM, ICQ Yahoo, Jabber, Facebook, MySpace, and Skype users (though support for Skype is buggy and has questionable legal ramifications). Ekiga, originally written by Damien Sandras as a master’s thesis, is one of the most popular open source softphones, particularly after the acquisition and subsequent shutdown of Gizmo5 by Google. One of the main reasons is its ease of use. Most VoIP clients, for example, require you to sign up with an external service. While you can use a third party server, Ekiga offers a built-in service, helping new users feel right at home. But don’t be fooled into thinking Ekiga is only for the novice user; the application sports many advanced features, including support for a laundry list of codecs and LDAP address-book lookup. Users thinking about moving to Ekiga full-time will also be happy to discover Ekiga Call Out, which allows users to call “real” phone numbers for cheap rates. To install QuteCom, use the qutecom package in the universe repositories. To install Ekiga, use the ekiga package in the universe repositories. full circle magazine #51 44 contents ^ TOP 5 - VOIP CLIENTS Linphone Twinkle If you want a slightly more configurable SIP client (with a much less user-friendly interface), check out Linphone. It has a bevy of advanced configuration settings, including IPv6/IPv4 switching, manual RTP/UDP ports, maximum transmission unit configuration, and so on. Additionally, it’s cross-platform - you can use the app on Android, Blackberry, or your iPhone, a nice feature if you want a uniform interface. Finally, for you terminal junkies, there’s a built-in command line interface. Twinkle has always been my favorite KDE SIP client. To start, it’s incredibly user friendly. Its wizard interface for setting up accounts includes built-in support for FreeWorld Dialup, sipgate, SIPPhone (though SIPPhone, run by Gizmo5, is currently defunct), and Diamondcard, which lets you make calls to landlines and other “real” phones. There’s also lots of KDE integration; of particular use is the KAddressBook integration (though you can use the built-in address book if you don’t use KDE). Finally, for the scripters and coders of the world, Twinkle offers event scripts. You can configure various Bash scripts to execute when certain events (incoming call, outgoing call, call released, etc.) are triggered. To install Linphone, use the Ubuntu package at the official download page. full circle magazine #51 45 contents ^ TOP 5 - VOIP CLIENTS Skype Homepage: No list of VoIP clients would be complete without Skype, the grandfather of softphones, recently acquired by Microsoft for $8.5 billion. Unfortunately, even before the Microsoft acquisition, Skype’s Linux support lagged. While both Windows and Mac users have access to 5.x builds, Linux users are forced to use 2.2. That means quite a few features, including group video, are missing. You’ll also be stuck with a slightly dated interface - though, if you’ve seen the more recent iterations of the Windows interface, that might not be a bad thing. Most unfortunate of all, Skype uses its own proprietary protocol - you have to have a Skype account, and you can’t officially use any third party clients to connect to it. Skype, download the .deb package for Ubuntu from the official homepage. Because the show is produced by the Ubuntu UK community, the podcast is covered by the Ubuntu Code of Conduct and is therefore suitable for all ages. Top 5 - THE END Unfortunately Andrew no longer has the time to continue writing the Top 5 and is leaving FCM. It's been a joy to work with him for the past four years and I hope you'll all join me in wishing him the best of luck in his endeavours. full circle magazine #51 Available in MP3/OGG format in Miro or iTunes, or listen to it directly on the site. 46 #52: Sunday 07th August 2011. Release date for issue #52: Friday 26th August #51 47 contents ^ Full Circle is a free, independent, monthly magazine dedicated to the Ubuntu family of Linux operating systems. Each month, it contains help... Published on Sep 5, 2012 Full Circle is a free, independent, monthly magazine dedicated to the Ubuntu family of Linux operating systems. Each month, it contains help...
https://issuu.com/fullcirclemagazine/docs/120906162746-285b5fca89a04d9d9bbba69f9844a62c
CC-MAIN-2018-22
refinedweb
16,662
66.23
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of C++11 status. Section: 21.6 [container.adaptors] Status: C++11 Submitter: Pablo Halpern Opened: 2009-08-26 Last modified: 2017-02-03 Priority: Not Prioritized View all other issues in [container.adaptors]. View all issues with C++11 status. Discussion: Under 21.6 [container.adaptors] of N2914 the member function of swap of queue and stack call: swap(c, q.c); But under 21.6 [container.adaptors] of N2723 these members are specified to call: c.swap(q.c); Neither draft specifies the semantics of member swap for priority_queue though it is declared. Although the distinction between member swap and non-member swap is not important when these adaptors are adapting standard containers, it may be important for user-defined containers. We (Pablo and Howard) feel that it is more likely for a user-defined container to support a namespace scope swap than a member swap, and therefore these adaptors should use the container's namespace scope swap. [ 2009-09-30 Daniel adds: ] The outcome of this issue should be considered with the outcome of 774 both in style and in content (e.g. 774 bullet 9 suggests to define the semantic of void priority_queue::swap(priority_queue&) in terms of the member swap of the container). [ 2010-03-28 Daniel update to diff against N3092. ] [ 2010 Rapperswil: ] Preference to move the wording into normative text, rather than inline function definitions in the class synopsis. Move to Tenatively Ready. [ Adopted at 2010-11 Batavia ] Proposed resolution: Change 21.6.4.1 [queue.defn]: template <class T, class Container = deque<T> > class queue { ... void swap(queue& q) { c.swap(q.c); } ... }; Change 21.6.5 [priority.queue]: template <class T, class Container = vector<T>, class Compare = less<typename Container::value_type> > class priority_queue { ... void swap(priority_queue& ) ;... }; Change 21.6.6.1 [stack.defn]: template <class T, class Container = deque<T> > class stack { ... void swap(stack& s) { c.swap(s.c); } ... };
https://cplusplus.github.io/LWG/issue1198
CC-MAIN-2018-47
refinedweb
341
57.67
Difference between revisions of "Hello XML World Example (Buckminster)" From Eclipsepedia Revision as of 12:00, 9 October 2006 < To: Buckminster Project This examples shows several Buckminster features in action. To run the example, use the File > Open three lines are the usual XML incantations - this is XML, and this is the syntax of the XML - i.e. CQuery-1.0 and the namespace is called "cq". - Next line declares where the resource map to use when resolving components is found (we will look at the resource map next). - Next line states that the wanted (root) component is called org.demo.hello.xml.world, that.
http://wiki.eclipse.org/index.php?title=Hello_XML_World_Example_(Buckminster)&diff=13338&oldid=13337
CC-MAIN-2013-48
refinedweb
105
60.24
Created on 2010-05-28 00:12 by terry.reedy, last changed 2010-05-29 16:47 by pakal. Some of my tests use io.StringIO and assert that captured print output equals expected output. Until now, I reused one buffer by truncating between tests. I recently replaced 3.1.1 with 3.1.2 (WinXP) and the *second* test of each run started failing. The following minimal code shows the problem (and should suggest a new unit test): from io import StringIO; s = StringIO(); print(repr(s.getvalue())) print('abc', file=s); print(repr(s.getvalue())) s.truncate(0); print(repr(s.getvalue())) print('abc', file=s); print(repr(s.getvalue())) prints (both command window and IDLE) '' 'abc\n' '' '\x00\x00\x00\x00abc\n' # should be and previously would have been 'abc\n' s.truncate(0) zeros the buffer and appears to set the length to 0, but a subsequent print sees the length as what it was before the truncate and appends after the zeroed characters. Ugh. I presume the problem is StringIO-emulation specific but have not tested 'real' files to be sure. --- also... >>> help(s.truncate) Help on built-in function truncate: truncate(...) Truncate size to pos. ... should be, for greater clarity, something like truncate([pos]) Truncate the size of the file or buffer to pos ... This was an exceptional API change in 3.1.2: truncate() doesn't move the file pointer anymore, you have to do it yourself (with seek(0) in your case). I'm sorry for the inconvenience; the change was motivated by the desire of having an API more consistent with other file-handling APIs out there. This should not have been closed yet. The announced policy is that bugfix releases should not add or change APIs. I think this hidden change (there is no What' New in 3.1.2 doc) should be reverted in 3.1.3. I will post on py-dev for other opinions. That aside, I think both the current behavior and docs are buggy and should be changed for 3.2 (and 3.1.3 if not reverted). 1. If the file pointer is not moved, then it seems to me that line 3 of my example output should have been '\0\0\0\0' instead of ''. The current behavior is '' + 'abc\n' == '\0\0\0\0abc\n', which is not sane. Maybe .getvalue() needs to be changed. It is hard to further critique the observed behavior since the intent is, to me, essentially undocumented. 2. The current 3.1.2/3.2a0 manual entry "truncate(size=None) Truncate the file to at most size bytes. size defaults to the current file position, as returned by tell(). Note that the current file position isn’t changed; if you want to change it to the new end of file, you have to seek() explicitly." has several problems. a. 'file' should be changed to 'stream' to be consistent with other entries. b. If "truncate the file to at most size bytes" does not mean 'change the steam position', then I have no idea what it is supposed to mean, or what .truncate is actually supposed to do. c. There is no mention that what is does do is to replace existing chars with null chars. (And the effect of that is/should be different in Python than in C.) d. There is no mention of the return value and what *it* is supposed to mean. 3. The current 3.1.2 (and I presume, 3.2a0) doc string (help entry) "truncate(...) Truncate size to pos. The pos argument defaults to the current file position, as returned by tell(). The current file position is unchanged. Returns the new absolute position." also has problems. a. Same change of 'file' to 'stream'. b. I already commented on ... and 'truncate size to pos', but to be consistent with the manual, the arg should be called 'size' rather that 'pos', or vice verse. c. 'truncate' does not define the meaning of 'truncate', especially when it no longer means what a native English speaker would expect it to mean. d. To me, 'the *new* absolute position' contradicts 'The current file position is *unchanged*' [emphases added]. Is there some subtle, undocumented, distinction between 'absolute position' and 'file [stream] position'? In any case, .truncate(0) returns 0, which seems to become the new position for .getvalue() but not for appending chars with print. To me, having *two* steams positions for different functions is definitely a bug. 4. There is no mention of a change to .truncate in What's New in Python 3.2. After searching more, I see that the change was discussed in #6939, by only two people. I see no justification there for changing 3.1 instead of waiting for 3.2. The OP suggested in his initial message, as I do here, that the doc say something about what .truncate does do with respect to padding, but that did not happen. For the record, Guido's decision to change 3.1: The change was announced in, but indeed it wasn't advertised in py3k changes - my apologies, I didn't check it was. I agree that the doc should be clarified on several aspects. * The returned value is the new file SIZE indeed (I guess we can still use "file" here, since imo other streams can't be truncated anyway). * Truncate() simply changes the current end-of-file (the word is historical, resize() would have been better - as this has been discussed on mailing lists). * Extending the file size with truncate() or with a write() after end-of-file (that's your sample's case) does, or does not (depending on the platform), fill the empty space with zeroes. Proposal for doc update : Resizes the file to the given size (or the current position), without moving the file pointer. This resizing can extend or reduce the current file size. In case of extension, the content of the new file area depends on the platform (on most systems, additional bytes are zero-filled, on win32 they're undetermined). Returns the new file size. Would it be ok thus ? How about reusing the documentation of legacy file objects: .” I've committed a doc update (a mix of the legacy truncate() doc and Pascal's proposal) in r81594. Good B-) Woudl it be necessary to update the docstrings too ?
http://bugs.python.org/issue8840
crawl-003
refinedweb
1,065
76.42
Work motivation While working for various clients that needed fast binary serialization, we had discovered that the binary and cereal packages are both inefficient and so we created the store package. In the high-frequency trading sector, we had to decode and encode a binary protocol into Haskell data structures for analysis. During this process it was made apparent to us that while we had been attentive to micro-benchmark with the venerable criterion package, we hadn't put a lot of work into ensuring that memory usage was well studied. Bringing down allocations (and thus work, and garbage collection) was key to achieving reasonable speed. Let's measure space In response, let's measure space more, in an automatic way. The currently available way to do this is by compiling with profiling enabled and adding call centers and then running our program with RTS options. For example, we write a program with an SCC call center, like this: main :: IO () main = do let !_ = {-# SCC myfunction_10 #-} myfunction 10 return () Then compile with profiling enabled with -p and run with +RTS -P and we get an output like this: COST CENTRE MODULE no. entries ... bytes MAIN MAIN 43 0 ... 760 CAF:main1 Main 85 0 ... 0 main Main 86 1 ... 0 myfunction_10 Main 87 1 ... 160 (Information omitted with ... to save space.) That's great, exactly the kind of information we'd like to get. But we want it in a more concise, programmatic fashion. On a test suite level. Announcing weigh To serve this purpose, I've written the weigh package, which seeks to automate the measuring of memory usage of programs, in the same way that criterion does for timing of programs. It doesn't promise perfect measurement and comes with a grain of salt, but it's reproducible. Unlike timing, allocation is generally reliable provided you use something like stack to pin the GHC version and packages, so you can also make a test suite out of it. How it works There is a simple DSL, like hspec, for writing out your tests. It looks like this: import Weigh main = mainWith (do func "integers count 0" count 0 func "integers count 1" count 1 func "integers count 2" count 2 func "integers count 3" count 3 func "integers count 10" count 10 func "integers count 100" count 100) where count :: Integer -> () count 0 = () count a = count (a - 1) This example weighs the function count, which counts down to zero. We want to measure the bytes allocated to perform the action. The output is: Case Bytes GCs Check integers count 0 0 0 OK integers count 1 32 0 OK integers count 2 64 0 OK integers count 3 96 0 OK integers count 10 320 0 OK integers count 100 3,200 0 OK Weee! We can now go around weighing everything! I encourage you to do that. Even Haskell newbies can make use of this to get a vague idea of how costly their code (or libraries they're using) is. Real-world use-case: store I wrote a few tests, while developing weigh, for the store package: encoding of lists, vectors and storable vectors. Here's the criterion result for encoding a regular Vector type: benchmarking encode/1kb normal (Vector Int32)/store time 3.454 μs (3.418 μs .. 3.484 μs) benchmarking encode/1kb normal (Vector Int32)/cereal time 19.56 μs (19.34 μs .. 19.79 μs) benchmarking encode/10kb normal (Vector Int32)/store time 33.09 μs (32.73 μs .. 33.57 μs) benchmarking encode/10kb normal (Vector Int32)/cereal time 202.7 μs (201.2 μs .. 204.6 μs) store is 6x faster than cereal at encoding Int32 vectors. Great! Our job is done, we've overcome previous limitations of binary encoding speed. Let's take a look at how heavy this process is. Weighing the program on 1 million and 10 million elements yields: 1,000,000 Boxed Vector Int Encode: Store 88,008,584 140 OK 1,000,000 Boxed Vector Int Encode: Cereal 600,238,200 1,118 OK 10,000,000 Boxed Vector Int Encode: Store 880,078,896 1,384 OK 10,000,000 Boxed Vector Int Encode: Cereal 6,002,099,680 11,168 OK store is 6.8x more memory efficient than cereal. Excellent. But is our job really finished? Take a look at those allocations. To simply allocate a vector of that size, it's: 1,000,000 Boxed Vector Int Allocate 8,007,936 1 OK 10,000,000 Boxed Vector Int Allocate 80,078,248 1 OK While store is more efficient than cereal, how are we allocating 11x the amount of space necessary? We looked into this in the codebase, it turned out more inlining was needed. After comprehensively applying the INLINE pragma to key methods and functions, the memory was brought down to: 1,000,000 Boxed Vector Int Allocate 8,007,936 1 OK 1,000,000 Boxed Vector Int Encode: Store 16,008,568 2 OK 1,000,000 Boxed Vector Int Encode: Cereal 600,238,200 1,118 OK 10,000,000 Boxed Vector Int Allocate 80,078,248 1 OK 10,000,000 Boxed Vector Int Encode: Store 160,078,880 2 OK 10,000,000 Boxed Vector Int Encode: Cereal 6,002,099,680 11,168 OK Now, store takes an additional 8MB to encode an 8MB vector, 80MB for an 80MB buffer. That's perfect 1:1 memory usage! Let's check out the new speed without these allocations: benchmarking encode/1kb normal (Vector Int32)/store time 848.4 ns (831.6 ns .. 868.6 ns) benchmarking encode/1kb normal (Vector Int32)/cereal time 20.80 μs (20.33 μs .. 21.20 μs) benchmarking encode/10kb normal (Vector Int32)/store time 7.708 μs (7.606 μs .. 7.822 μs) benchmarking encode/10kb normal (Vector Int32)/cereal time 207.4 μs (204.9 μs .. 210.3 μs) store is 4x faster than previously! store is also now 20x faster than cereal at encoding a vector of ints. Containers vs unordered-containers Another quick example, the Map structures from the two containers packages. Let's weigh how heavy fromList is on 1 million elements. For fun, the keys are randomly generated rather than ordered. We force the list completely ahead of time, because we just want to see the allocations by the library, not our input list. fromlists :: Weigh () fromlists = do let !elems = force (zip (randoms (mkStdGen 0) :: [Int]) [1 :: Int .. 1000000]) func "Data.Map.Strict.fromList (1 million)" Data.Map.Strict.fromList elems func "Data.Map.Lazy.fromList (1 million)" Data.Map.Lazy.fromList elems func "Data.IntMap.Strict.fromList (1 million)" Data.IntMap.Strict.fromList elems func "Data.IntMap.Lazy.fromList (1 million)" Data.IntMap.Lazy.fromList elems func "Data.HashMap.Strict.fromList (1 million)" Data.HashMap.Strict.fromList elems func "Data.HashMap.Lazy.fromList (1 million)" Data.HashMap.Lazy.fromList elems We clearly see that IntMap from containers is about 1.3x more memory efficient than the generic Ord-based Map. However, HashMap wipes the floor with both of them (for Int, at least), using 6.3x less memory than Map and 4.8x less memory than IntMap: Data.Map.Strict.fromList (1 million) 1,016,187,152 1,942 OK Data.Map.Lazy.fromList (1 million) 1,016,187,152 1,942 OK Data.IntMap.Strict.fromList (1 million) 776,852,648 1,489 OK Data.IntMap.Lazy.fromList (1 million) 776,852,648 1,489 OK Data.HashMap.Strict.fromList (1 million) 161,155,384 314 OK Data.HashMap.Lazy.fromList (1 million) 161,155,384 314 OK This is just a trivial few lines of code to generate this result, as you see above. Caveat But beware: it's not going to be obvious exactly where allocations are coming from in the computation (if you need to know that, use the profiler). It's better to consider a computation holistically: this is how much was allocated to produce this result. Analysis at finer granularity is likely to be guess-work (beyond even what's available in profiling). For the brave, let's study some examples of that. Interpreting the results: Integer Notice that in the table we generated, there is a rather odd increase of allocations: Case Bytes GCs Check integers count 0 0 0 OK integers count 1 32 0 OK integers count 2 64 0 OK integers count 3 96 0 OK integers count 10 320 0 OK integers count 100 3,200 0 OK What's the explanation for those bytes in each iteration? Refreshing our memory: The space taken up by a "small" Integer is two machine words. On 64-bit that's 16 bytes. Integer is defined like this: data Integer = S# Int# -- small integers | J# Int# ByteArray# -- large integers For the rest, we'd expect only 16 bytes per iteration, but we're seeing more than that. Why? Let's look at the Core for count: Main.main48 = __integer 0 Main.main41 = __integer 1 Rec { Main.main_count [Occ=LoopBreaker] :: Integer -> () [GblId, Arity=1, Str=DmdType <S,U>] Main.main_count = \ (ds_d4Am :: Integer) -> case eqInteger# ds_d4Am Main.main48 of wild_a4Fq { __DEFAULT -> case ghc-prim-0.4.0.0:GHC.Prim.tagToEnum# @ Bool wild_a4Fq of _ [Occ=Dead] { False -> Main.main_count (minusInteger ds_d4Am Main.main41); True -> ghc-prim-0.4.0.0:GHC.Tuple.() } } end Rec } The eqInteger# function is a pretend-primop, which apparently combines with tagToEnum# and is optimized away at the code generation phase. This should lead to an unboxed comparison of something like Int#, which should not allocate. This leaves only the addition operation, which should allocate one new 16-byte Integer. So where are those additional 16 bytes from? The implementation of minusInteger for Integer types is actually implemented as x + -y: -- TODO -- | Subtract two 'Integer's from each other. minusInteger :: Integer -> Integer -> Integer minusInteger x y = inline plusInteger x (inline negateInteger y) This means we're allocating one more Integer. That explains the additional 16 bytes! There's a TODO there. I guess someone implemented negateInteger and plusInteger (which is non-trivial) and had enough. If we implement a second function count' that takes this into account, import Weigh main :: IO () main = mainWith (do func "integers count 0" count 0 func "integers count 1" count 1 func "integers count 2" count 2 func "integers count 3" count 3 func "integers count' 0" count' 0 func "integers count' 1" count' 1 func "integers count' 2" count' 2 func "integers count' 3" count' 3) where count :: Integer -> () count 0 = () count a = count (a - 1) count' :: Integer -> () count' 0 = () count' a = count' (a + (-1)) we get more reasonable allocations: Case Bytes GCs Check integers count 0 0 0 OK integers count 1 32 0 OK integers count 2 64 0 OK integers count 3 96 0 OK integers count' 0 0 0 OK integers count' 1 16 0 OK integers count' 2 32 0 OK integers count' 3 48 0 OK It turns out that count' is 20% faster (from criterion benchmarks), but realistically, if speed matters, we'd be using Int, which is practically 1000x faster. What did we learn? Even something as simple as Integer subtraction doesn't behave as you would naively expect. Considering a different type: Int Comparatively, let's look at Int: import Weigh main = mainWith (do func "int count 0" count 0 func "int count 1" count 1 func "int count 10" count 10 func "int count 100" count 100) where count :: Int -> () count 0 = () count a = count (a - 1) The output is: Case Bytes GCs Check ints count 1 0 0 OK ints count 10 0 0 OK ints count 1000000 0 0 OK It allocates zero bytes. Why? Let's take a look at the Core: Rec { Main.$wcount1 [InlPrag=[0], Occ=LoopBreaker] :: ghc-prim-0.4.0.0:GHC.Prim.Int# -> () [GblId, Arity=1, Caf=NoCafRefs, Str=DmdType <S,1*U>] Main.$wcount1 = \ (ww_s57C :: ghc-prim-0.4.0.0:GHC.Prim.Int#) -> case ww_s57C of ds_X4Gu { __DEFAULT -> Main.$wcount1 (ghc-prim-0.4.0.0:GHC.Prim.-# ds_X4Gu 1); 0 -> ghc-prim-0.4.0.0:GHC.Tuple.() } end Rec } It's clear that GHC is able to optimize this tight loop, and unbox the Int into an Int#, which can be put into a register rather than being allocated by the GHC runtime allocator to be freed later. The lesson is not to take for granted that everything has a 1:1 memory mapping at runtime with your source, and to take each case in context. Data structures Finally, from our contrived examples we can take a look at user-defined data types and observe some of the optimizations that GHC does for memory. Let's demonstrate that unpacking a data structure yields less memory. Here is a data type that contains an Int: data HasInt = HasInt !Int deriving (Generic) instance NFData HasInt Here are two identical data types which use HasInt, but the first simply uses HasInt, and the latter unpacks it. data HasPacked = HasPacked HasInt deriving (Generic) instance NFData HasPacked data HasUnpacked = HasUnpacked {-# UNPACK #-} !HasInt deriving (Generic) instance NFData HasUnpacked We can measure the difference by weighing them like this: -- | Weigh: packing vs no packing. packing :: Weigh () packing = do func "\\x -> HasInt x" (\x -> HasInt x) 5 func "\\x -> HasPacked (HasInt x)" (\x -> HasPacked (HasInt x)) 5 func "\\x -> HasUnpacked (HasInt x)" (\x -> HasUnpacked (HasInt x)) 5 The output is: \x -> HasInt x 16 0 OK \x -> HasPacked (HasInt x) 32 0 OK \x -> HasUnpacked (HasInt x) 16 0 OK Voilà! Here we've demonstrated that: HasInt xconsists of the 8 byte header for the constructor, and 8 bytes for the Int. HasPackedhas 8 bytes for the constructor, 8 bytes for the first slot, then another 8 bytes for the HasIntconstructor, finally 8 bytes for the Intitself. HasUnpackedonly allocates 8 bytes for the header, and 8 bytes for the Int. GHC did what we wanted! Summary We've looked at: - What lead to this package. - Propose that we start measuring our functions more, especially libraries. - How to use this package. - Some of our use-cases at FP Complete. - Caveats. - Some contrived examples do not lead to obvious explanations. Now I encourage you to try it out!
https://www.fpcomplete.com/blog/2016/05/weigh-package
CC-MAIN-2020-05
refinedweb
2,387
64.81
I bought a used copy of Learning to Program in C++ by Steve Heller. It ususally comes with DJGPP Compiler, but mine was used and didn't have it. I just use Borland C++ 5.0 Anyways one of the examples in the book has #include "string6.h" my guess is for the part in the code that says string answer; Anyways when I compile the program I get Unable to open include file 'string6.h' Undefind symbol 'string' I'm guessing that Borland 5 doesn't use the header string6.h and only DJGPP does, but I can't find what the substitute would be for Borland so I can make this program work. Please help. Here is the whole source if needed __________________________________________________ __ #include <iostream.h> #include "string6.h" int main() { string answer; cout << "Please respong to the following statement "; cout << "with either true or false\n"; cout << "Susan is the world's most tenacious novice.\n"; cin >> answer; if(answer !="true") if(answer !="false") cout << "Please answer with either true or false."; if(answer =="true") cout << "Your answer is correct\n"; if(answer=="false") cout << "Your answer is erroneous\n"; return 0; }
http://cboard.cprogramming.com/cplusplus-programming/16599-string-h-help-printable-thread.html
CC-MAIN-2016-40
refinedweb
196
83.15
kalzium/libscience #include <psetables.h> Detailed Description Holds all periodic system tables and make them accessible. Provides functions to easyli create pse tables with qGridLayouts or qGraphicsView. creating a table for the gridlayout position elements in a qGraphicsScene getting the position of the Numerations for the periodic system of elements (j) Provides shape and elements of diferent peridic tables of elements Definition at line 68 of file psetables.h. Constructor & Destructor Documentation Definition at line 37 of file psetables.cpp. Member Function Documentation Returns the KalziumTableType with the id specified. It will gives 0 if none found. Definition at line 57 of file psetables.cpp. Returns the KalziumTableType whose name is the id specified. It will gives 0 if none found. Definition at line 66 of file psetables.cpp. Definition at line 41 of file psetables.cpp. Returns a list with the names of the table types we support. Definition at line 47 of file psetables.cpp. The documentation for this class was generated from the following files: Documentation copyright © 1996-2020 The KDE developers. Generated on Fri Jan 17 2020 03:26:43 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
https://api.kde.org/stable/kdeedu-apidocs/kalzium/libscience/html/classpseTables.html
CC-MAIN-2020-05
refinedweb
203
51.75
D-Bus users Below is a list of projects using D-Bus. It is not complete so if you know of a project which should be added please just edit the wiki. (Or send mail to the mailing list and see if someone has time to do it for you.) The list also includes the bus names owned by the projects' software. This is to help avoid namespace clashes as it is important that no two projects use the same bus name. Not all D-Bus usages require owning a bus name, of course. Be sure to namespace your bus name in com.example.?ReverseDomainStyle as well as listing it here. Finally, the API column shows a code indicating which of the various D-Bus APIs has been used. These are defined as follows: - * D - the raw D-BUS library * G - the GLib bindings * Q - the Qt bindings * P - the Python bindings * M - the Mono/.NET bindings
https://freedesktop.org/wiki/Software/DbusProjects/?action=PackagePages
CC-MAIN-2016-40
refinedweb
157
81.73
Convert a Ruby Method to a Lambda Convert a method to a lambda in Ruby: lambda(&method(:events_path)). OR JUST USE JAVASCRIPT. It might not be clear what I was talking about or why it would be useful, so allow me to elaborate. Say you’ve got the following bit of Javascript: var ytmnd = function() { alert("you're the man now " + (arguments[0] || "dog")); }; Calling ytmnd() gets us you're the man now dog, while ytmnd("david") yields you're the man now david. Calling simply ytmnd gives us a reference to the function that we’re free to pass around and call at a later time. Consider now the following Ruby code: def ytmnd(name = "dog") puts "you're the man now #{name}" end First, aren’t default argument values and string interpolation awesome? Love you, Ruby. Just as with our Javascript function, calling ytmnd() prints “you’re the man now dog”, and ytmnd("david") also works as you’d expect. But. BUT. Running ytmnd returns not a reference to the method, but rather calls it outright, leaving you with nothing but Sean Connery’s timeless words. To duplicate Javascript’s behavior, you can convert the method to a lambda with sean = lambda(&method(:ytmnd)). Now you’ve got something you can call with sean.call or sean.call("david") and pass around with sean. BUT WAIT. Everything in Ruby is an object, even methods. And as it turns out, a method object behaves very much like a lambda. So rather than saying sean = lambda(&method(:ytmnd)), you can simply say sean = method(:ytmnd), and then call it as if it were a lambda with .call or []. Big ups to Justin for that knowledge bomb. WHOOOO CARES All contrivances aside, there are real-life instances where you’d want to take advantage of this language feature. Imagine a Rails partial that renders a list of filtered links for a given model. How would you tell the partial where to send the links? You could pass in a string and use old-school :action and :controller params or use eval (yuck). You could create the lambda the long way with something like :base_url => lambda { |*args| articles_path(*args) }, but using method(:articles_path) accomplishes the same thing with much less line noise. I’m not sure it would have ever occurred to me to do something like this before I got into Javascript. Just goes to show that if you want to get better as a Rubyist, a great place to start is with a different language entirely.
https://www.viget.com/articles/convert-ruby-method-to-lambda/
CC-MAIN-2021-43
refinedweb
426
71.75
[Fredrik Johansson] >>> I'd rather like to see a well implemented math.nthroot. 64**(1/3.0) >>> gives 3.9999999999999996, and this error could be avoided. [Steven D'Aprano] >> >>> math.exp(math.log(64)/3.0) >> 4.0 >> >> Success!!! [Tom Anderson] > Eeeeeeenteresting. I have no idea why this works. Given that math.log is > always going to be approximate for numbers which aren't rational powers of > e (which, since e is transcendental, is all rational numbers, and > therefore all python floats, isn't it?), i'd expect to get the same > roundoff errors here as with exponentiation. Is it just that the errors > are sufficiently smaller that it looks exact? Writing exp(log(x)*y) rather than x**y is in _general_ a terrible idea, but in the example it happens to avoid the most important rounding error entirely: 1./3. is less than one-third, so 64**(1./3.) is less than 64 to the one-third. Dividing by 3 instead of multiplying by 1./3. is where the advantage comes from here: >>> 1./3. # less than a third 0.33333333333333331 >>> 64**(1./3.) # also too small 3.9999999999999996 >>> exp(log(64)/3) # happens to be on the nose 4.0 If we feed the same roundoff error into the exp+log method in computing 1./3., we get a worse result than pow got: >>> exp(log(64) * (1./3.)) # worse than pow's 3.9999999999999991 None of this generalizes usefully -- these are example-driven curiousities. For example, let's try 2000 exact cubes, and count how often "the right" answer is delivered: c1 = c2 = 0 for i in range(1, 2001): p = i**3 r1 = p ** (1./3.) r2 = exp(log(p)/3) c1 += r1 == i c2 += r2 == i print c1, c2 On my box that prints 3 284 so "a wrong answer" is overwhelmingly more common either way. Fredrik is right that if you want a library routine that can guarantee to compute exact n'th roots whenever possible, it needs to be written for that purpose. ... > YES! This is something that winds me up no end; as far as i can tell, > there is no clean programmatic way to make an inf or a NaN; All Python behavior in the presence of infinities, NaNs, and signed zeroes is a platform-dependent accident, mostly inherited from that all C89 behavior in the presence of infinities, NaNs, and signed zeroes is a platform-dependent crapshoot. > in code i write which cares about such things, i have to start: > > inf = 1e300 ** 1e300 > nan = inf - inf That would be much more portable (== would do what you intended by accident on many more platforms) if you used multiplication instead of exponentiation in the first line. ... > And then god forbid i should actually want to test if a number is NaN, > since, bizarrely, (x == nan) is true for every x; instead, i have to > write: > > def isnan(x): > return (x == 0.0) and (x == 1.0) The result of that is a platform-dependent accident too. Python 2.4 (but not eariler than that) works hard to deliver _exactly_ the same accident as the platform C compiler delivers, and at least NaN comparisons work "as intended" (by IEEE 754) in 2.4 under gcc and MSVC 7.1 (because those C implementations treat NaN comparisons as intended by IEEE 754; note that MSVC 6.0 did not): >>> inf = 1e300 * 1e300 >>> nan == nan >>> nan = inf - inf >>> nan == 1.0 False >>> nan < 1.0 False >>> nan > 1.0 False >>> nan == nan False >>> nan < nan False >>> nan > nan False >>> nan != nan True So at the Python level you can do "x != x" to see whether x is a NaN in 2.4+(assuming that works in the C with which Python was compiled; it does under gcc and MSVC 7.1). > The IEEE spec actually says that (x == nan) should be *false* for every x, > including nan. I'm not sure if this is more or less stupid than what > python does! Python did nothing "on purpose" here before Python 2.4. > And while i'm ranting, how come these expressions aren't the same: > > 1e300 * 1e300 > 1e300 ** 2 Because all Python behavior in the presence of infinities, NaNs and signed zeroes is a platform-dependent accident. > And finally, does Guido know something about arithmetic that i don't, Probably yes, but that's not really what you meant to ask <wink>. > or is this expression: > > -1.0 ** 0.5 > > Evaluated wrongly? Read the manual for the precedence rules. -x**y groups as -(x**y). -1.0 is the correct answer. If you intended (-x)**y, then you need to insert parentheses to force that order.
https://mail.python.org/pipermail/python-list/2005-July/298805.html
CC-MAIN-2019-35
refinedweb
782
74.49
I'm using cakePHP and there is a very annoying problem. My controllers extend the AppController class which extends the Controller class. But when I want to use the Controller class's methods the code completion does not recognize them in my controllers. The funny thing is that when I use the IDE's Go To Declaration it finds the Controller class immediately. Any hint what can I do? Thanks. I'm using cakePHP and there is a very annoying problem. My controllers extend the AppController class which extends the Controller class. But when I want to use the Controller class's methods the code completion does not recognize them in my controllers. Hi Gabor, Are you talking about this: ? Hi Andrij, not exactly that is my problem. Here's my skeleton: class InvoicesController extends AppController{ function blahblah(){ $this->find() =>Code completion does not work!!! } } class AppController extends Controller{ function ... } class Controller{ function find(){ ... } function etc(){ ... } } In my InvoicesController I can't make the code completion work to use the Controller's methods. Am I right assuming, that it is a BUG and phpstorm is unable to tackle such a problem? It's a pitty because it's impossible to use this IDE to develop in cakePHP. I'm not part of JB Team, just ordinary PhpStorm user, and in my opinion (based on what I was able to figure out) it is NOT a bug. I have downloaded the latest stable CakePHP (v1.3.7) and did test the code you have provided. I cannot find find() method in any parent class (InvoicesController -> AppController -> Controller -> Object). I'm not familiar with CakePHP so I may be wrong, but I think that the "find()" functionality is exposed during runtime by injecting methods of one class (possibly Model) into another (Controller or its' descendant) trough so-called "behaviors" (maybe something similar to what Yii framework has). If that is correct then I do not see how PhpStorm can pick it up automatically. You may try and resolve this yourself by using PHPDoc @method functionality, but I'm unsure. You're partly right, because there's no find method() inside those classes, but there are others (redirect, set etc) , that are not picked up by phpstorm. I don't miss the concrete find() method I miss the complete access to the public methods of the class. If it doesn't work in phpstorm then it can be called a bug or the users should be warned that phpstorm does not support certain farmeworks only on texteditor level. Probably the file search can't go below certain level of folders and isn't able to pick up the classes there. By definition, the product does not support anything but what we explicitly say it does. Now I see which problem you are having. The problem is that there is more than 1 class named AppController in whole CakePHP framework: 2 are in tests folder (which you do not need in production environment anyway) 1 in CakePHP\cake\console\templates\skel\app_controller.php 1 in CakePHP\cake\libs\controller\app_controller.php If there are more than one class with the same name within the project PhpStorm most likely will fail with properly resolving available methods/properties (at least at the current moment). For web app I guess you only need the last one. If you can exclude the rest 3 from the project, then PhpStorm works fine (redirect() & set() and all other methods of Controller class are available for code completion -- tested myself). You can resolve this in 2 ways: 1) If you connect CakePHP via External Libraries (Settings | PHP), then you have to physically delete tests and CakePHP\cake\console folders 2) If CakePHP is part of the project (which I think is most likely to be the case), then just exclude these folders from the project in Settings | Directories. @Andriy Your help is very appreciated. Yes, thank you for the answer. This is a very simple solution to the problem but it definetely works. But it raises other questions as well. In many other cases you have more than one calss under the same name. Is there a way to configure PHPStorm to differentiate between them or for the code completion to show hints from both classes? Using namespace will bring me closer to this? I'm not using namespaces myself (still doing just fine with traditional approach), but they definitely should help here as they were designed for such (or similar) scenarios. I just do not see how this may help to solve this particular problem (unless you're having completely different framework/project in mind). Currently none. But there is a ticket on Issue Tracker regarding this issue which scheduled for 2.1 IIRC (.. or maybe 2.x) ?:|. I think it is a platform limitation which is not that easy/fast to fix as it (codebase) affects every IDE from JB, not just PhpStorm.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206376909-Code-completion-does-not-recognize-elements-of-the-parents-of-parent-class?page=1
CC-MAIN-2020-29
refinedweb
824
63.09
Summary: in this tutorial, you’ll learn how to select columns from a DataFrame object. In this example, we are going to use the data from List of metro systems on Wikipedia and parse its HTML table into a DataFrame object to serve as the main data source with the following code snippet. # Create a DataFrame from Wiki table import pandas as pd url = '' df_list = pd.read_html(url) df = df_list[0] Get column list In order to select columns from a DataFrame, first we need to know how many column it contains. The columns attribute of DataFrame returns an Index object, basically represents the list of all column names. # Get column names from DataFrame df.columns #Output Index(['City', 'Country', 'Name', 'Year opened', 'Year of last expansion', 'Stations', 'System length', 'Annual ridership(millions)'], dtype='object') Select one column To get the individual column data we can either use df.column_name. For example. # Select column using name attribute df.Country # Output 0 Algeria 1 Argentina 2 Armenia 3 Australia 4 Austria ... 188 United States 189 United States 190 United States 191 Uzbekistan 192 Venezuela Name: Country, Length: 193, dtype: object Alternatively, use df['column_name'] syntax achieves the same result. # Select column using index operator df['Country'] # Output 0 Algeria 1 Argentina 2 Armenia 3 Australia 4 Austria ... 188 United States 189 United States 190 United States 191 Uzbekistan 192 Venezuela Name: Country, Length: 193, dtype: object Select multiple columns In order to select multiple columns, we have to use the square brackets syntax, but this time, a list of column names will be passed : df[["column1", "column2"]]. # Select multiple columns col_names = ['Country', 'Name'] df[col_names] We’ll get another DataFrame as the result. Select columns by index number If you want to select a column by its index number, you have to use DataFrame’s iloc property. More details on the method will be covered more thoroughly in the future. Use df.iloc[:, n] to access the column at index number n, like the example below. # Select the first column data (index 0) df.iloc[:, 0] # Output 0 Algiers 1 Buenos Aires 2 Yerevan 3 Sydney 4 Vienna ... 188 San Francisco 189 San Juan 190 Washington, D.C. 191 Tashkent 192 Caracas Name: City, Length: 193, dtype: object Similar to how we pass a list of column names to select multiple columns, you can also pass a list of column index numbers to create a subset of a DataFrame out of multiple columns. The data returned will be wrapped in a DataFrame. # Select columns with index 0 and 3 col_index = [0,3] df.iloc[:, col_index] # or shorter df.iloc[:, [0,3]] Result : Select columns by condition Just like how we can select rows based on multiple conditions using boolean index, we can also do the same thing with columns. # Select columns by multiple conditions condition_1 = df["Country"] == "China" condition_2 = df["Year opened"] == "2012" df[condition_1 & condition_2] Please note that we did not clean up citations and footnotes from our pandas DataFrame, so any “greater than” or “lower than” operation should not be evaluated to avoid strange errors. Columns with duplicate names Let’s say you have a CSV which contains multiple columns that share the same name and want to import it into a DataFrame for easier querying. Don’t worry, Pandas read_csv method has a parameter built to handle duplicate columns. By default, mangle_dupe_cols is set to True, meaning that duplicate columns will be automatically renamed to X, X.1, X.2 and so on, once they are imported to the DataFrame. Let’s see how this works in action. from io import StringIO import pandas as pd # Two duplicate id columns txt = """success, created, id, errors, id 1, 2, 3, 4, 5, 6 7, 8, 9, 10, 11, 12""" # Create a DataFrame out of the above data df = pd.read_csv(StringIO(txt), skipinitialspace=True) df We’ll get back the DataFrame with no duplicate names. If you don’t like this way of renaming column names, you can always overwrite them with your own by setting df.columns to a new list of strings. # Set new column names df.columns = ['success', 'created', 'id_created', 'errors', 'id_error'] df The original DataFrame will be updated instantly. Summary - Select one column with either df.column_nameor df['column_name'] - Select multiple columns using index number with df.iloc[:, [id1,id2, id3]] - Select multiple columns using conditions by passing boolean variables : df[df['col1'] == 'val1' & df['col2'] >= 'val2'] - Duplicate column names are renamed automatically upon reading CSV. - Batch column name renaming can be done by setting df.columnsto a Python list.
https://monkeybeanonline.com/select-dataframe-columns/
CC-MAIN-2022-27
refinedweb
765
62.17
Announcing F# 4.6 Phillip We’re excited to announce general availability of F# 4.6 and the F# tools for Visual Studio 2019! In this post, I’ll show you how to get started, explain the F# 4.6 feature set, give you an update on the F# tools for Visual Studio, and talk about what we’re doing next. F# 4 also get an appropriate .NET Core installed by default. Once you have installed either .NET Core or Visual Studio 2019, you can use F# 4.6 with Visual Studio, Visual Studio for Mac, or Visual Studio Code with Ionide. Anonymous Records The only language change in F# 4.6 is the introduction of Anonymous Record types. Although it is a single feature, it can be used in numerous contexts, which naturally means I’ll enumerate all the (useful) use cases I can think of. Basic usage From an F#-only perspective, Anonymous Records are F# record types that don’t have explicit names and can be declared in an ad-hoc fashion. Although they are unlikely to fundamentally change how you write F# code, they do fill many smaller gaps F# programmers have encountered over time, and can be used for succinct data manipulation that was not previously possible. They’re quite easy to use. For example, here how you can interact with a function that produces an anonymous record: However, they can be used for more than just basic data containers. The following expands the previous sample to use a more type-safe printing function: If you call printCircleStats with an anonymous record that had the same underlying data types but different labels, it fails to compile: This is exactly how F# record types work, except everything has been declared ad-hoc rather than up-front. This has benefits and drawbacks depending on your particular situation, so I recommend using anonymous records judiciously rather than replacing all your up-front F# record declarations. Struct anonymous records Anonymous records can also be structs by using the struct keyword: You can call a function that takes a struct anonymous record like this: Or you can use “structness inference” to elide the struct at the call site: Structness inference will treat the instance of the anonymous record you created and passed in as if it were a struct. Note that the reverse is not true: It is not currently possible to define IsByRefLike or IsReadOnly struct anonymous record types. There is a language suggestion that proposes this enhancement, but due to oddities in syntax it is still under discussion. Taking things further Anonymous records are great for adding a bit more readability to ephemeral data, but they can also be used in a broader set of more advanced contexts. I’ll go through some of them here. Anonymous records are serializable You can serialize and deserialize anonymous records: Here’s a sample library that is also called in another project: This may make things easier for scenarios like lightweight data going over a network in a system made up of microservices. Anonymous records can be combined with other type definitions You may have a tree-like data model in your domain, such as the following example: It is typical to see cases modeled as tuples with named union fields, but as data gets more complicated, you may extract each case with records: This recursive definition can now be shortened with anonymous records if it suits your codebase: As with previous examples, this technique should be applied judiciously and when applicable to your scenario. Anonymous records ease the use of LINQ in F# F# programmers typically prefer using the List, Array, and Sequence combinators when working with data, but it can sometimes be helpful to use LINQ. This has traditionally been a bit painful, since LINQ makes use of C# anonymous types. With anonymous records, you can use LINQ methods just as you would with C# and anonymous types: Anonymous records ease working with Entity Framework and other ORMs F# programmers using F# query expressions to interact with a database should see some minor quality of life improvements with anonymous records. For example, you may be used to using tuples to group data with a select clause: But this results in columns with names like Item1 and Item2 that are not ideal. Prior to anonymous records, you would need to declare a record type and use that. Now you don’t need to do that: No need to specify the record type up front! This makes query expressions much more aligned with the actual SQL that they model. Anonymous records also let you avoid having to create AnonymousObject types in more advanced queries just to create an ad-hoc grouping of data for the purposes of the query. Anonymous records ease the use of custom routing in ASP.NET Core You may be using ASP.NET Core with F# already, but may have run into an awkwardness when defining custom routes. As with previous examples, this could still be done by defining a record type up front, but this has often been seen as unnecessary by F# developers. Now you can do it inline: It’s still not ideal due to the fact that F# is strict about return types (unlike C#, where you need not explicitly ignore things that return a value). However, this does let you remove previously-defined record definitions that served no purpose other than to allow you to send data into the ASP.NET middleware pipeline. Copy and update expressions with anonymous records As with Record types, you can use copy-and-update syntax with anonymous records: However, copy-and-update expressions do not restrict the resulting anonymous record to be the same type: The original expression can also be a record type: You can also copy data to and from reference and struct anonymous records: The use of copy-and-update expressions gives anonymous records a high degree of flexibility when working with data in F#. Equality and pattern matching Anonymous records are structurally equatable and comparable: However, the types being compared must have the same “shape”: Although you can equate and compare anonymous records, you cannot pattern match over them. This is for three reasons: - A pattern must account for every field of an anonymous record, unlike record types. This is because anonymous records do not support structural subtyping – they are nominal types. - There is no ability to have additional patterns in a pattern match expression, as each distinct pattern would imply a different anonymous record type. - The requirement to account for every field in an anonymous record would make a pattern more verbose than the use of “dot” notation. Instead, “dot”-syntax is used to extract values from an anonymous record. This will always be at most as verbose as if pattern matching were used, and in practice is likely to be less verbose due to not always extracting every value from an anonymous record. Here’s how to work with a previous example where anonymous records are a part of a discriminated union: There is currently an open suggestion to allow pattern matching on anonymous records in the limited contexts that they could actually be enabled. If you have a proposed use case, please use that issue to discuss it! FSharp.Core additions It wouldn’t be another F# release without additions to the F# Core Library! ValueOption expansion The ValueOption type introduces in F# 4.5 now has a few more goodies attached to the type: - The DebuggerDisplay attribute to help with debugging - IsNone, IsSome, None, Some, op_Implicit, and ToString members This gives it “parity” with the Option type. Additionally, there is now a ValueOption module containing the same functions the the Option module has: This should alleviate concerns that ValueOption is the weird sibling of Option that doesn’t get the same set of functionality. tryExactlyOne for List, Array, and Seq This fine function was contributed by Grzegorz Dziadkiewicz. Here’s how it works: F# tools for Visual Studio 2019 In addition to releasing F# 4.6, we’ve done a lot of work in the F# tools for Visual Studio, especially around performance for larger solutions. Before going into some details, I’ll reiterate how you may want think about the F# tools for Visual Studio. You may have noticed that the last release of Visual Studio 2017 is called “update 15.9” or “15.9”. Visual Studio 2017 started at update 15.0 when it first released. Since then, we delivered 8 official updates to F# tooling, including the F# 4.5 release in the 15.8 update. This update cadence was somewhat unexpected for people who were used to the slow update cadence of past Visual Studio versions. Visual Studio 2019 will continue the frequent, incremental updates for the F# tools. You might notice is that VS 2019 is also called “version 16.0” or “16.0”, especially if you follow F# development on GitHub. Although Visual Studio 2019 is a new version of Visual Studio, the F# tools are an incremental improvement over update 15.9 with a focus on performance. With this in mind, you can think of the Visual Studio 2019 release and future updates as a continuous evolution of F# tooling. We’re incredibly excited to continue shipping frequent updates to F# and F# tools. Performance updates Our primary area of focus for this release has been performance for medium-to-large sized solutions. Historically, the F# compiler and tools have struggled with larger solutions, leading to a lot of memory and CPU usage, and sometimes forcing people to work around the tooling behavior rather than apply their preferred approach to managing source code. To address this, we started our work by analyzing memory usage, and identified multiple patterns where the F# compiler was rather gluttonous with Large Object Heap (LOH) allocations. One such case we found was a workaround to a bug in 2008 where the IDE could not provide IntelliSense on the very last line of a source file. That workaround resulted in horrible performance characteristics by forcing the re-allocation of the string representing an active source file for every operation in the IDE! To address the performance problem, we removed all allocations of this nature and changed the F# parser so that it could provide enough context to the IntelliSense engine such that IntelliSense at the bottom of a source file would work properly.. This performance work was done in collaboration with some of our wonderful OSS contributors: Avi Avni, Chet Husk, Steffen Forkmann, and Eugene Auduchinok. We’re extremely grateful to have such wonderful people who are excited to help improve the F# experience! F# users with medium to large solutions (50+ projects) should notice things running smoother, especially over long stretches of work. There’s still a lot to go, especially for solutions using Type Providers extensively. We’re already focusing on addressing some issues there, so you can expect future updates of the F# tools to perform even better over time. New features It wouldn’t be a new release without some new features! Saul Rennison contributed a feature that intelligently indents pasted code based on where your cursor is. If you turn on Smart Indent via Tools > Options > Text Editor > F# > Tabs > Smart this will be on automatically. We’ve also made a few small changes to IntelliSense to help clean up the experience a bit: - Backspace after an identifier has been fully typed out will no longer suggest seemingly unrelated items - The primary constructor after an inherit clause now shows in completion, contributed by Eugene Auduchinok - Symbols from unopened namespaces will not be shown by default anymore (you can turn it back on in the F# editor settings) And as you would expect, Anonymous Records have their labels show up in all F# tooling (Go to Definition, Rename, Find All References, etc.) What we’re working on next As previously mentioned, we’re continuing our foray into solving deep-rooted performance issues in F# tooling. Although we cannot guarantee that every update in F# tooling comes with performance improvements, it is an ambient priority that we will continually improve throughout the Visual Studio 2019 update timeline. We also have some big milestones coming up this year: - F# 5.0 and .NET Core 3.0 - F# Interactive on .NET Core - F# for Machine Learning Work on F# 5.0 is already underway, with multiple new features under active development and others planned. Our goals are to align F# 5.0 with .NET Core 3.0, but this will ultimately be a quality call. If certain features are important for you to be in F# 5.0, we encourage you to engage with us either on the F# Language Suggestions or F# Design repositories. F# Interactive (FSI) is also a top priority for us to align with .NET Core 3.0. You can already try out FSI on .NET Core 3.0 Preview today, and you can expect further improvements to ship in subsequent previews. Additionally, we’re looking to finish up work to allow you to reference packages directly in F# scripts on both .NET Core and .NET Framework. This should fundamentally change how F# scripting is done and greatly simplify existing workflows people have today. Finally, we’re also devoting significant time in developing a compelling offering for using F# to do machine learning. In addition to being supported on ML.NET, we’re working towards a world-class experience when using F# and Tensorflow. Tensorflow shape checking and shape inference tie quite nicely into the F# type system and tools, which we feel is a differentiator when compared to using Python, Swift, or Scala. This is still an active research area, but over time we expect the experience to become quite solid as F# Interactive experiences on .NET Core also shape up. Wrapping up Although the total list of new features in F# 4.6 isn’t enormous, there’s a lot to this release! Additionally, our focus on performance should result in a cleaner experience when using F# in Visual Studio 2019. As we continue to make strides in this area with future updates, you can expect things to get better and better. As always, thank you to the F# community for their contributions, both in code and design discussion, that help us continue to advance the F# language and tools. Cheers, and happy hacking!
https://devblogs.microsoft.com/dotnet/announcing-f-4-6/
CC-MAIN-2019-18
refinedweb
2,414
60.35
I'm new to Java and have been working on an assignment for days w/ no luck. I'm happy to post my program if needed BUT for now just looking for a hint(s) and clarification. I am trying to write a 'simple' applet that will use a one dimensional array w/ 5 elements. Basically I want my user to enter a number in the text field, I then want to validate the number- must be between 10-100, I want to validate the entriescounter which must be less than 5, then (finally) I want to search my array to see if the users number was previously entered, if not I want to append that number into the array (how?) if the number is alrady stored I want to increment my counter but ignore the input. I want the unique user input to dispay in another jtextfield and I want my error responses to display in the showStatus box- ie, invalid entry after the valid test and over 5 entries made- if entries counter reaches 5. I am new to this site as well- so if I have asked too much I'm sorry in advance. Thanks- any help on arrays and appending input from a user is most appreaciated. ok i understand your question, but where are you at? Do you have any problems with any specific areas? Have you done much with GUI stuff before? A kram a day keeps the doctor......guessing Kram thanks for the reply - I'm pretty lost at this point. I sorta get the basic GUI elements. My program logic is lacking- I don't have my validation set up in the right place(s) I guess and my error responses are suppose to display next to my counter in the showStatus area- yet I couldn't get that right so I put them in another JText field - ug! I seriously don't get how the appending/storing user input to my array element works. I keep thinking about applications and using strings to parse Int the input- I don't know what the equiv is in an applet. (does any of that even make sense?) import java.awt.*; import java.awt.event.*; import javax.swing.*; public class A7_13 extends JApplet implements ActionListener { JLabel enterLabel, resultLabel; JTextField enterField, resultField; int array[]; // set up applet's GUI public void init() { // get content pane and set its layout to FlowLayout Container container = getContentPane(); container.setLayout( new FlowLayout() ); // set up JLabel and JTextField for user input enterLabel = new JLabel( "Enter number:" ); container.add( enterLabel ); enterField = new JTextField( 4 ); container.add( enterField ); // register this applet as enterField's action listener enterField.addActionListener( this ); // set up JLabel and JTextField for displaying results resultLabel = new JLabel( "Numbers entered:" ); container.add( resultLabel ); resultField = new JTextField( 10 ); resultField.setEditable( false ); container.add( resultField ); // I added this thinking that I need it later // to validate the numbers entered it is useless here int value = 0; int number = 0; // create array that will be populated by users input array = new int [ 5 ]; for ( int counter = 0; counter < array.length; counter++ ) array[ counter++ ] = value; // I think the above is the right format // I don't have the variable value set up //I want that to somehow append the input to //the array elements- validate, search, display if ( number < 10 || number > 100 ) showStatus( "Invalid Entry!" ); return; // i think this is right logic but //do I need more if statements for my counter // like if counter < 5 //return showstatus "over 5 entries" } // end method init // search array for specified key value public int linearSearch( int array2[], int key ) { // loop through array elements for ( int counter = 0; counter < array2.length; counter++ ) // if array element equals key value, return location if ( array2[ counter ] == key ) return counter; return -1; // key not found // this is suppose to be my search piece } // end method linearSearch // obtain user input and call method linearSearch public void actionPerformed( ActionEvent actionEvent ) { // input also can be obtained with enterField.getText() String searchKey = actionEvent.getActionCommand(); // pass array reference to linearSearch; normally, a reference to an // array is passed to a method to search corresponding array object int element = linearSearch( array, Integer.parseInt( searchKey ) ); // this is my search should I append here // display search result if ( element != -1 ) resultField.setText( "number is " + element ); else showStatus( "Invalid entry" ); showStatus( "Numbers entered " + value ); enterField.setText( "" ); //clear field } // end method actionPerformed } // end class A7_13 I think I figured out part of my problem- my if validation should not be in the INIT- I am going to try moving it into the actionPerformed. yeah thats a good place to do it, as you will probably have figured out, when ever a button or field get activated by the user the actionPerformed() method get fired with an ActionEvent passed to it, this even can be used to access what exactly has been pushed or edited. Then once you know that the user has pushed the "Go" button, you can code in the parts that gather the text from the fields and put them into variables for validation Forum Rules Development Centers -- Android Development Center -- Cloud Development Project Center -- HTML5 Development Center -- Windows Mobile Development Center
http://forums.devx.com/showthread.php?140662-How-to-store-user-input-in-an-array-element&p=416577&mode=linear
CC-MAIN-2014-10
refinedweb
863
56.59
G'day, I just thought that I would make a little project announcement about something I've been tinkering with for the past week or two. NuFox is a server-side XUL toolkit written on top of nevow. It uses a customised stan dialect with xml namespace support as well as livepage (nevow svn head version), and then abstracts that all away into a more python-gui-toolit-like API. Its very early in development but its already pretty powerful in terms of what you can do with it, check it out and play with some of the examples. Feedback is most welcome. -tjs
http://twistedmatrix.com/pipermail/twisted-web/2005-July/001782.html
CC-MAIN-2014-42
refinedweb
104
76.96
How to code Java (Basics) ( 300 Cycle Special! ) Java Tutorial (Basics) Understanding and coding Java Hey guys! In this tutorial, I'll be going over most of the basics of Java and most of what I have learned. At the end there is some extra stuff that’s not in the basics, but useful stuff that I was able to learn from Stack Overflow. I know this is really long, but hang in there. Feel free to give me feedback in the comments! Class You cannot write any program in Java without having a class. In Repl.it when you make a new repl in Java, your class is already printed there for you. It looks something like this: public class Main {}. It will look different when you start up a new Repl because there will be code inside the curly braces. Also, note that the word class, isn’t capitalized. It is very important that you keep this in mind, otherwise your program will return an error. Curly Braces One of the most important parts of Java that allow your code to run is curly braces. Curly braces are the outline of the code you are running; everything inside of them belongs to the function, class, or method you are running. Indentation Indentation is one of the main things you need to check over to make sure that your code works the way you want it to. For example, when you open up a new repl, notice the lines that appear. All the code that is to the right of the first gray line belongs to the main class because of the curly braces. Everything right of the second gray line still belongs to the main class, but first does any commands that belong to the public static void main curly braces. Main Method You'll notice that on the next line of code, it says public static void main(String[] args) {}. This is basically the main method showing that you can access the code file anywhere. The most important part that you need to remember when typing this is that after this part [], there needs to be a space. Then you can type in args. It is also very important to not capitalize any of the words in that line except for the word String. Note: There is a way to make it private, but I’m not sure about how to do it exactly and if it would work on Repl.it Comments do not affect your code at all. They are just there so you can annotate your code without making any real changes to it. To type in single-line comments, simply do this: //This is a comment To type in multiple line comments, do this: /*This is also a comment */ Print Statements To print something in Java, you can just type in System.out.println("Hello world!"); This tells the system to output whatever is inside the parenthesis. Inside the parenthesis, it is very important to use double quotes, unless you're typing in a variable, which we'll get to later. Terminal In case you are new to coding and are not familiar with terms, the place where your code is outputted is called the Terminal. Semicolons Semicolons are also one of the most important parts of Java. You need to place a semicolon after every function or statement that you use otherwise your code won't run. Comments don't count. Variable Types Functions are pieces of data that are called and can store values. Some data types are: - int (stores integers) - double (stores integers with decimal values) - String (stores text) - boolean (true or false statements) How to make variables: To make variables, you need to name them. The way to name a variable is usually for the purpose that it is created for. You can have numbers in the variable name, but they can’t be the first character in the name. If you would like to have more than one word in the name of your variable, the first word is all lowercase. If your variable name isn't more than two words, then it should be in all lowercase. Here are some examples: class Main { public static void main(String[] args) { System.out.println("Hello world!"); //This is a comment String myName = "Chicken Alfredo"; String hello = "Hello, "; String world = "World! "; int one1 = 1; double two2 = 2; boolean playAgain = true; } } When you set the value of each variable, make sure you use the correct type of value, otherwise, you will get an error. For example, if you try to type words into a variable classified as an integer ( int slices = "8 slices of pizza."), it would return an error, because you can't store text in a number value. #### Once you declare variables the first time, you don’t need to make them again. Just type in the name of the variable, but not the type, whenever you want to do something with it later on in the code. Arithmetic Operators ( Math Operators ) Inside the variable you create, you can do anything that involves math with them. If you want to add numbers or something, use the + between them. To multiply use this: *. To divide use this (only one): /, and to subtract, do this: -. A special operator, which is called the modulus, looks like this: %. The modulus gives the remainder of something that you divide. For example, if I coded int myNumber = 5 % 2; and printed it, I would see the number 1, because that is the remainder. The ++ operator increases the value of a variable by 1, and the -- decreases the value of a variable by 1. If you use the += or -= operators, you can add or subtract any value you want from the variable. For example: int myAge=15; myAge++1; myAge--1; myAge+=4; myAge-=4; Concatenation If you want to print several things together on the same print line, just add a + between each part. But if you're doing strings and regular variables, you need to place the + after the quotation marks. All examples: class Main { public static void main(String[] args) { System.out.println("Hello world!"); //This is a comment String myName = "Chicken Alfredo"; String hello = "Hello, "; String world = "World! "; int one = 1; double two = 2; boolean playAgain = true; System.out.println(hello + " world! My name is " + myName + ". " + one + two); } } This would output: Hello World! Hello, world! My name is Chicken Alfredo. 12 Creating Scanners If you would like to take in user input, you need to import a Scanner. This is easy to do, but you must place all imports before the class. After you import the Scanner, you need to create one. Here is how to do it: import java.util.Scanner; //Yes, you need a semicolon at the end of all imports class Main { public static void main(String[] args) { Scanner key = new Scanner(System.in); //You can name your scanner whatever you want (keyboard, input, etc.) } } Taking in User Input To take in user input after asking a question, there are several ways depending on the type of variable that is being entered. I will print out an example of what the code will look like, then break it down: Import java.util.Scanner; class Main{ public static void main(String[] args { Scanner key=new Scanner(System.in); System.out.println(“What is your name? ”); String yourName = key.nextLine(); } } The key.nextLine() tells the program that the next line (until the user presses Enter), is what will be stored in the variable yourName. Naturally, for each type of variable, you will have different ways to read in the user input. For example if you asked how old the user was, you would have to do: int age = key.nextInt(). Or if you asked them how much money they have: double money = key.nextDouble(). Logical Operators There are 3 logical operators in Java. The ‘and’ operator, the ‘or’ operator, and the ‘not’ operator. The ‘and’ operator is represented by this: &&; This is the ‘or’ operator: ||. And this is the ‘not’ operator: !. These logical operators are mostly used in conditional statements (next section), so advanced Java coders don’t always use logical operators in conditional statements. Conditional Statements ( If / Else if / else ) If and else if statements are similar to booleans, except that they are conditional statements, and they will only run the code inside of them if the condition is true. In a typical if statement, you will see something like this: if (userGuess==secretNumber). Inside the parenthesis is the condition. When you are comparing two values to see if the condition is true or false, you need to use two equal signs if it’s a number. For strings, there are two ways to compare the strings. You can do this if (yourName.equalsIgnoreCase(“Jeff”)). The .equalsIgnoreCase() is the ultimate way to make sure if the user’s answer (only text), is the same, or not the same, no matter the capitalization. Inside the parentheses at the end of the .equalsIgnoreCase()<-, you need to enter text in quotation marks or the name of a String variable. When you are trying to make the condition to do something as long as the number/string does not equal another number/string, then you need to use the ‘not’ operator. The placement of the exclamation mark is also important. If you are dealing with numbers, you need to put the exclamation mark after the variable name but before the equal sign: if (money!=0). If you are doing it with strings, you need to do the same thing with the exclamation mark as you did with the numbers, but you also need to remember to put the quotation marks if you are typing in text. You don’t need to do that if you are putting in the name of a variable: if (bank!=“none”). If you use the .equalsIgnoreString() way, then here’s another example: if (!bank.equalsIgnoreCase(“none”)). When you use else if statements, you need to put them after if statements, and only because the if statement is false but you would like to add another condition. When you use the else statement, you only put that if all the conditional statements you put are going to be false. Let me give you an example: if (age<=17 && age>=0) System.out.println(“You are underage.”); else if (age>=18) System.out.println(“You are overage.”); else System.out.println(“You are a fetus.”); (I know the example is really bad) Another important thing to keep in mind while doing conditional statements is that if the code that belongs to them is more than one line long, you need to add curly braces, and everything indented inside the curly braces is what happens if the conditional statement is true/false. For example: if (age<=17 && age>=17) { System.out.println(“You are underage.”); System.out.println(“It is truly amazing that you have stuck with this tutorial because of how long it is”); } else { System.out.println(“You are overage”); System.out.println(“I am glad you are trying to learn Java. Since you have to read a lot in your classes, I don’t think this will tutorial will be as bad for you (but still pretty bad) ”); } If you would like to set two or more conditions, then you would need to use the logical operators. For Loops There are three parts to a for loop (four if you are counting the parenthesis). An example of a for loop would look like this: for (int a=0; a<=10; a++). In the first part of the for loop (int a=0;), we are declaring the variable. It must be an integer. In the next part, it says that as long as a is less than or equal to 10, the for loop will run. The last part increases the value of a by one every time the code in the for loop is run. In this section, you can set the condition of the variable to be greater or less than or just equal to a value. In the next line of code, put in curly braces, and put your code after that (Don’t forget to indent). Once again, this code will run as long as the second part of the condition is true. While Loops A while loop looks the same as the if / elif / else statements, with the only differences being that the word is while, and that it will keep running until the condition is met. The most common example of a while loop is this: while (playAgain==true) {}. This would mean all the code put in the curly braces of the while loop would keep running as long as the boolean playAgain is false. You can put multiple conditional statements inside a while loop, and you can even put conditional statements inside other conditional statements. Here is an example of a while loop (obviously not going to display all the code here): boolean playAgain=true; while (playAgain==true) { System.out.println(“Would you like to play again (y/n) ?”); String choice=key.nextLine(); if (choice.equalsIgnoreCase(“y”)) playAgain=true; else playAgain=false; } Extras: Generating randoms In order to generate random numbers, you need to first declare the name and the type of variable you are using (ex: int secretNumber =). After that you enter the type of variable in parenthesis, so it would start to look like this: int secretNumber = (int). Now you are ready for the last part: int secretNumber = (int)(Math.random() * 50 + 1); Math.random is helping to create something random, and the int I put in parenthesis earlier tells it what type of random variable to make. The 50 tells the program the maximum value the random int can be, while the one shows the lowest value the random int can be. You can generate random doubles or integers, but as far as I know, that’s it. You can’t generate Strings, and definitely not booleans. Clearing the terminal If it’s possible for you to memorize this code, then great! I don’t really understand how the arrangement of the characters work. I recommend creating a repl to clear the terminal and place this code there, so you don’t need to keep coming back here or to another website. Here is the code to clear the terminal: System.out.print("\033[H\033[2J"); System.out.flush(); Changing the color of text you output So this one is actually pretty special because you have to play around with the numbers (31m to 33m, etc.) in order to change the color of the text, make it bold or underlined, etc. Make sure before each color you have the \033[31m part. Here are some different ways to print unique colors, and 2 text effects: System.out.println("\033[31;1mHello\033[0m, \033[32;1;2mworld!\033[0m"); System.out.println("\033[31mRed\033[32m, Green\033[33m, Yellow\033[34m, Blue\033[0m"); Congratulations on reading to the end of this tutorial (maybe)! I hope this tutorial helped you to learn Java and be able to understand it better. Please give me feedback in the comments, and if you have any questions, please leave them in the comments as well. I hope you enjoyed (and stayed awake lol) and don’t forget to stay safe! :) Also, for the repl below, yes you can enter a negative number for your age if you want :) also, it is a nice tutorial, I don't code in java, but it seems to have covered most things Wow very long! 😅 Yep, thanks! :) @Bookie0 :) @studentAlfredAl
https://replit.com/talk/learn/How-to-code-Java-Basics-300-Cycle-Special/39727?order=new
CC-MAIN-2021-17
refinedweb
2,623
71.85
Augmented reality has is showing up everywhere these days. Apple’s release of ARKit in iOS 11 and Google’s ARCore APIs are guaranteed to accelerate this trend by making augmented reality development accessible to even more developers and users. We’re especially excited about the combination of augmented reality and real-time communications. In this post we’ll dive into a simple example on how to combine Programmable Video and iOS’s ARKit to share an AR experience with another user. In upcoming posts, we’ll show how to add interactivity to your AR communications apps. What You Need All anyone needs to develop with ARKit is - an Apple device with an A9 or later processor (iPhone 6s or later, iPhone SE, any iPad Pro, or the 2017 iPad) - iOS 11 - XCode 9 (both of which you can download here.) We’ll also be using Twilio Programmable Video so you’ll need an account here if you don’t have one already and SceneKit, which is for 3D content. On the other hand, some developers use SpriteKit, which is Apple’s framework for 2D content. We’ll also need an Apple developer account as AR apps cannot be tested on the iOS simulator. The yellow points around the ship above are feature points, more of which appear when the lighting is good and when you move around a bit(not too quickly, however). That lets ARKit better detect features, like orientation and position, of physical objects. Set up your iOS Developer Environment for ARKit First, let’s make a new XCode project and select the Augmented Reality App template. Next, make sure the language is set to Swift and that the Content Technology is set to SceneKit. At the moment, we can not test AR apps on the iOS simulator, so we need to sign our app with a developer account. Make sure your developer account is added to Xcode and and choose your team to sign the app. Once you save your project, go to Preferences->Locations to make sure your Command Line Tools version is set to Xcode 9.0. Let’s also make sure that we set our deployment target to iOS 11.0 in your project-level Build Settings. Run the project, and the app will ask for permissions to use the camera, and then a model of a ship will be added to the coordinates (0, 0, 0) of our scene. To get started with Twilio Video for Swift, you can download it manually or use CocoaPods. On the command line in your current directory, type pod init. Then replace what is in Podfile with the following code. # Uncomment the next line to define a global platform for your project # platform :ios, '9.0' target 'ARSceneKitTwilioVideo' do # Comment the next line if you're not using Swift and don't want to use dynamic frameworks use_frameworks! # Pods for ARSceneKitTwilioVideo pod 'TwilioVideo', '~> 1.3' target 'ARSceneKitTwilioVideoUITests' do inherit! :search_paths # Pods for testing end end Close Xcode and on the command line in our current directory run pod repo update followed by pod install. Then open the project’s .xcworkspace. To find out more, check out the Twilio Video Quickstart for Swift. Swift Code for ARKit with Twilio Video At the top of ViewController.swift, let’s import ARKit and SceneKit, and a few other libraries we’ll need. import UIKit import SceneKit import ARKit import TwilioVideo Below that, let’s declare the Twilio Video properties we’ll need. var room: TVIRoom? weak var consumer: TVIVideoCaptureConsumer? var frame: TVIVideoFrame? var displayLink: CADisplayLink? var supportedFormats = [TVIVideoFormat]() var videoTrack: TVILocalVideoTrack? var audioTrack: TVILocalAudioTrack? The consumer variable consumes frames and status events from the camera. Supported formats include dimensions, frameRate, or pixelFormat. To quote the Programmable Video Getting Started page, “A Room represents a real-time audio, video, and/or screen-share session, and is the basic building block for a Programmable Video application.” Similarly, “Tracks represent the individual audio and video media streams that are shared with a Room.” We’ll initialize our instance methods in viewDidLoad() in ViewController.swift like so. We’ll replace TWILIO-ACCESS-TOKEN with a real access token later. That function should now look like this: override func viewDidLoad() { super.viewDidLoad() // Set the view's delegate sceneView.delegate = self // Show statistics such as fps and timing information sceneView.showsStatistics = true self.sceneView.preferredFramesPerSecond = 30 self.sceneView.contentScaleFactor = 1 // Create a new scene let scene = SCNScene(named: "art.scnassets/ship.scn")! // Set the scene to the view self.sceneView.scene = scene self.supportedFormats = [TVIVideoFormat()] sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints, ARSCNDebugOptions.showWorldOrigin] //show feature points self.videoTrack = TVILocalVideoTrack.init(capturer: self) self.audioTrack = TVILocalAudioTrack.init() let token = TWILIO-ACCESS-TOKEN let options = TVIConnectOptions.init(token: token, block: {(builder: TVIConnectOptionsBuilder) -> Void in builder.videoTracks = [self.videoTrack!] builder.roomName = "Arkit" }) self.room = TwilioVideo.connect(with: options, delegate: self as? TVIRoomDelegate) } In line 6 above we create a new scene with SceneKit and load an image called ship. The formats we want to support that are included with TVIVideoFormat() in line 7 are dimensions (like the size of the video content), frame rate, and pixel format. What we set with sceneView.debugOptions in line 8 displays a coordinate axis visualization showing the position and orientation of the AR world coordinate system. As shown below, red represents the x-axis, green represents the y-axis, and blue represents the z-axis. On lines 10 and 11 we begin capturing audio data and video in our iOS app by creating a TVILocalVideoTrack with an associated TVILocalVideoCapturer and a TVILocalAudioTrack. You can also customize some configurations when you connect to a room with TVIConnectOptions as we do on lines 13-17. We set videoTracks on line 15 so media that was already previously made can be shared with other participants when they connect to the room. Then on line 16 also set the room name that we are joining. These options are implemented when we connect to our room. Next, create a new function called startCapture that begins when another person joins the room “Arkit.” func startCapture(format: TVIVideoFormat, consumer: TVIVideoCaptureConsumer) { self.consumer = consumer self.displayLink = CADisplayLink(target: self, selector: #selector(self.displayLinkDidFire)) self.displayLink?.preferredFramesPerSecond = self.sceneView.preferredFramesPerSecond displayLink?.add(to: RunLoop.main, forMode: RunLoopMode.commonModes) consumer.captureDidStart(true) } The above function allows our app to synchronize the drawing to the refresh rate of the display. The preferred frames per second controls the number of times the function displayLinkDidFire is called per second. Let’s make that now. @objc func displayLinkDidFire() { let myImage = self.sceneView.snapshot let imageRef = myImage().cgImage! let pixelBuffer = self.pixelBufferFromCGImage(fromCGImage: imageRef) self.frame = TVIVideoFrame(timestamp: Int64((displayLink?.timestamp)! * 1000000), buffer: pixelBuffer, orientation: TVIVideoOrientation.up) self.consumer?.consumeCapturedFrame(self.frame!) } Above, TVIVideoFrame represents a video frame that has been captured or decoded, rendering our video content as a CoreVideo buffer. The timestamp is either the microseconds at which the frame was captured or when it should be rendered, and the pixelBuffer is a CVImageBuffer containing the image data for the frame. Let’s make that function, pixelBufferFromCGImage to create an RGB format CVImageBuffer from the captured CVImageBuffer now. func pixelBufferFromCGImage(fromCGImage image: CGImage) -> CVPixelBuffer { let frameSize = CGSize(width: image.width, height: image.height) let options: [AnyHashable: Any]? = [kCVPixelBufferCGImageCompatibilityKey: false, kCVPixelBufferCGBitmapContextCompatibilityKey: false] var pixelBuffer: CVPixelBuffer? = nil let status: CVReturn? = CVPixelBufferCreate(kCFAllocatorDefault, Int(frameSize.width), Int(frameSize.height), kCVPixelFormatType_32ARGB, (options! as CFDictionary), &pixelBuffer) if status != kCVReturnSuccess { return NSNull.self as! CVPixelBuffer } CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0)) let data = CVPixelBufferGetBaseAddress(pixelBuffer!) let rgbColorSpace: CGColorSpace? = CGColorSpaceCreateDeviceRGB() let context = CGContext(data: data, width: Int(frameSize.width), height: Int(frameSize.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace!, bitmapInfo: (CGImageAlphaInfo.noneSkipLast.rawValue)) context?.draw(image, in: CGRect(x:0, y:0, width: image.width, height: image.height)) CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0)) return pixelBuffer! } On line 3 we make an image buffer to hold pixels in main memory with the options on line 2 which sets booleans indicating if the pixel buffer is compatible with CGImage types or Core Graphics bitmap contexts. CVPixelBufferCreate on line 4 makes a single buffer for a given size and pixel format with data specified by a memory location. We lock that location on line 10 before accessing pixel data with the CPU on line 11 and use it on line 13 to make a CGContext, or 2D drawing destination needed to render our colored image. We can customize it based on our device. Calling draw on the context on line 14 draws the image wherever we specify in the parameters. When we’re done rendering the image we then need to unlock that memory location as on line 15. AR Sessions In our viewWillAppear function, we need to create an AR Session configuration and run it. This allows us to track motion in our video and also to place our virtual content on real-world surfaces. override func viewWillAppear(_animated: Bool) { super.viewWillAppear(animated) let configuration = ARWorldTrackingConfiguration() sceneView.session.run(configuration) } In viewWillDisappear, we want to pause the session. override func viewWillDisappear(_animated: Bool) { super.viewWillDisappear(animated) sceneView.session.pause() } Twilio Video Access Tokens Access tokens are short-lived tokens used to authenticate Twilio Client SDKs like Video or Chat. They’re created on the server to verify a client’s identify and grant access to client API features. Testing the App To test the app, let’s clone this Video Quickstart repo and follow the directions in the README to provide the receiving end of a Twilio Video call for our demo. Use the Twilio Video console to generate an Access Token for the test app. Take note of what you put under Room Name: in this post, we use ‘Arkit’. After you click Generate Access Token, copy that token and go back into ViewController.swift and replace TWILIO-ACCESS-TOKEN with it. Copy and paste that token into the accessToken variable in VideoQuickStart’s View Controller and run the app. Connect to the ‘Arkit’ room. If you find yourself getting booted from the room, it is most likely because you didn’t use a different identity for the simulator or that your token expired. You can either keep generating a new one and replacing it in your code, or spin up a server like this. Finally we can run the ARKit application and test that the video app in the simulator can see the ARKit demo. Testing on a physical iOS device is preferable because of performance issues from testing on the simulator. Conclusion Yes, it really is that simple to get started developing augmented reality applications. For the complete code, check out the GitHub repo. A huge thank you goes to Iñaqui Delgado, Chris Eagleston, and Piyush Tank of the Video team for their help with this post. AR complemented with Twilio Video is an amazing way to create new ways to communicate. Wouldn’t it be cool to run an immersive video app while playing Minecraft? Maybe that could be your next project—you can find me online at the links below, I can’t wait to see what you build. - Twitter: @lizziepika - GitHub: elizabethsiegle
https://www.twilio.com/blog/2017/10/ios-arkit-swift-twilio-programmable-video.html
CC-MAIN-2019-04
refinedweb
1,852
50.33
Dear all, maybe this should go to the Enthought list, but as the failure is directly related to the pylab switch of ipython, I thought I try it here first: On OSX I have trouble with using the pylab switch for ipython after I copied the gdal.pth into the Enthought site-packages folder (to be able to use my KyngChaos GDAL Frameworks inside the Enthought Python). The gdal.pth does the following to the sys.path: import sys; sys.path.insert(0,'/Library/Frameworks/GDAL.framework/Versions/1.7/Python/site-packages') and in that folder there is: -rw-rw-r-- 1 root admin 128B 8 Feb 20:52 gdal.py -rw-r--r-- 1 root admin 274B 3 Mar 23:20 gdal.pyc -rw-rw-r-- 1 root admin 143B 8 Feb 20:52 gdalconst.py -rw-r--r-- 1 root admin 304B 3 Mar 23:20 gdalconst.pyc -rw-rw-r-- 1 root admin 147B 8 Feb 20:52 gdalnumeric.py -rw-r--r-- 1 root admin 309B 3 Mar 23:20 gdalnumeric.pyc drwxrwxr-x 42 root admin 1.4K 3 Mar 23:20 numpy -rw-rw-r-- 1 root admin 125B 8 Feb 20:52 ogr.py -rw-r--r-- 1 root admin 286B 3 Mar 23:20 ogr.pyc drwxrwxr-x 21 root admin 714B 3 Mar 23:20 osgeo -rw-rw-r-- 1 root admin 125B 8 Feb 20:52 osr.py -rw-r--r-- 1 root admin 286B 3 Mar 23:20 osr.pyc Maybe the double import of a potentially different numpy compared to the Enthought numpy creates the Bus Error? Not so much a double import. Only one version ever gets imported, but the GDAL Python bindings expect its version and matplotlib expects another version. If so, how can I avoid it? You would have to rebuild the GDAL Python bindings against Enthought's numpy. But why does everything work fine, when I start an Enthought ipython withOUT the -pylab switch? Importing 'from osgeo import gdal' and using it works fine in this case (Tried ReadAsArray from a gdal dataset and imshow'ed it without problems, apart from that I had to call show() because of the lack of the -pylab switch, but other than that, fine). Sorry, seems that I had messed up my config somehow, now GDAL does not work anymore inside the EPD Python. Hmm, have to make some clean tests... PS.: Sorry for the mail-list noob question, but how can I nicely reply to your answer like you replied to my question, with 'Robert Kern wrote' and so on? There's no reply possible on sourceforge and the digest contains obviously many emails, so how do you do this? Trying Unison via the GMane NNTP now, but weird that nabble has your last answer already for long time, whereas GMane still does not show it. Does the NNTP pull the mailing lists on a low frequency. Man, I think it's 10 years ago or so since I have used NNTP. Used it a lot in my first net years, 94/95 Thanks for the GMane tip, the nabble reply web interface does not even quote properly, very strange thing, or i do something wrong. ··· On 2010-04-13 18:13:40 +0200, K. -Michael Aye said: On 2010-04-13 10:18 AM, K. -Michael Aye wrote:
https://discourse.matplotlib.org/t/ipython-pylab-switch-gdal-enthought/13460
CC-MAIN-2021-43
refinedweb
563
81.83
code: #include "list.h"main(int argc, char *argv[]) { int i, N = atoi(argv[1]), M = atoi(argv[2]); Node t, x; initNodes(N); for (i = 2, x = newNode(1); i <= N; i++) { t = newNode(i); insertNext(x, t); x = t; } while (x != Next(x)) { for (i = 1; i < M ; i++) x = Next(x); freeNode(deleteNext(x)); } printf("%d\n", Item(x)); }. #include "list.h"main(C cArg, SZ rgszArg[]) { I iNode, cNodes = atoi(rgszArg[1]), cNodesToSkip = atoi(rgszArg[2]); PNODE pnodeT, pnodeCur; InitNodes(cNodes); for (iNode = 2, pnodeCur = PnodeNew(1); iNode <= cNodes ; iNode++) { pnodeT = PnodeNew(i); InsertNext(pnodeCur, pnodeT); pnodeCur = pnodeT; })); } printf("%d\n", Item(nodeCur)); } So what changed? First off, all the built-in types are gone. Hungarian can use them, but not for most reasons. Next, the hungarian types "I", "C", and "SZ" are used to replace indexes, counts and strings. Obviously the C runtime library functions remain the same. Next, I applied the appropriate prefix - i for indices, c for counts, p<type> for "pointer to <type>". The Node type was renamed PNODE, in Hungarian, all types are uppercased. In Hungarian, the name of the routine describes the return value - so a routine that returns a "pointer to foo" is named "Pfoo<something relevent to which pfoo is being returned>". Looking at the transformation, I'm not sure it's made the code any easier to read, or more maintainable. The next examples will try to improve things.
http://blogs.msdn.com/b/larryosterman/archive/2004/11/09/254561.aspx
CC-MAIN-2014-42
refinedweb
242
63.9
In “Mother of Components: Processor Expert with NXP Kinetis SDK V2.0 Projects” I presented an approach how to use Processor Expert components with the NXP Kinetis SDK. This article is a tutorial how to create a blinking LED project with that approach, using McuOnEclipse Processor Expert components and the Kinetis SDK V2.0. As board the FRDM-K22F is used: Tutorial: Blinky with Processor Expert and the Kinetis SDK V2.0 In this quick tutorial I’m showing how use Processor Expert for a ‘blinky’ project on the FRDM-K22F.Similar steps can be used for any other board: simply change the board and port number for the LED. I’m using the Kinetis Design Studio V3.2. Get the Kinetis SDK for the K22F from and have it installed. Then create a SDK V2.0 project using File > New > Kinetis SDK v2.x Project: Select the board, with all drivers and without FreeRTOS: Next, create a Processor Expert project with File > New > Processor Expert Project. Which board/CPU selected does not really matter, but it is better if you select a matching core (Cortex-M0+, M4 or M4F). Just make sure you create a Processor Expert Project *without* SDK support: With this I have a SDK project (FRDM-K22F_SDK_V2.0_PEx) and a ‘Mother’ project (FRDM-K22F_PEx_for_SDK_v2.0): On the FRDM-K22F there are three LED’s for the RGB LED on the board: - Red: PTA1 - Green: PTA2 - Blue: PTD5 💡 if using a different board, you have to check the schematics to find the correct pin settings. For this I’m adding three LED components to the Processor Expert mother project. As the LEDs reference the SDK component, it gets added too: In the SDK component, I specify the SDK to be used: The LED components reference the SDK component, so they know which SDK is used: The LED component is now are using the BitIO sub component: This does not work for the SDK, so we have to use the special SDK BitIO component. I can change this in the LED settings so it uses the SDK_BitIO component instead: Now it uses the SDK_BitIO inherited/sub component: Next I specify the pin I would like to use, in this case PTA1 (Pin number 1 of port GPIOA): I repeat the steps for the other two LEDs (Green (PTA2) and Blue (PTD5)). Depending on the CPU you have selected, there might be a PinSetting component present in the project by default: disable it with the context menu (Component Enabled) so there is a small ‘x’ to show that it is disabled: That’s it for now on the Processor Expert side. Generate code now for the Processor Expert project: Next I’m going to use the generated code. One way is to drag&drop the Generated_Code Folder into the SDK project with CTRL pressed: this will pop up a dialog how I would like to handle the operation: This will create a linked folder, and it will compile whatever files are in this folder. So if I later add/change the Processor Expert files, they will be used in my SDK project. In the SDK project, I need to tell the compiler where he can find the header files. I do this in the SDK project settings like this: There are two Processor Expert files which are specific to the To disable them, I use the context menu on the file(s) and select ‘Exclude Resource from Build’: PinMuxing and Clock Gating First we need to initialize the LED pins used, plus enable the clocks for the peripherals. This is done inside BOARD_InitPins() which is called from main(): The BOARD_InitPins() is in pin_mux., and there I add the following code to clock the ports I’m going to use for the LEDs and configure them as GPIO pins: /* LEDs are on PTA1, PTA2 and PTD5 */ CLOCK_EnableClock(kCLOCK_PortA); CLOCK_EnableClock(kCLOCK_PortD); PORT_SetPinMux(PORTA, 1u, kPORT_MuxAsGpio); PORT_SetPinMux(PORTA, 2u, kPORT_MuxAsGpio); PORT_SetPinMux(PORTD, 5u, kPORT_MuxAsGpio); 💡 I was considering adding the Pin Muxing to the LED Init() code too. But to be consistent with the way how the SDK is doing things, I had not done it there. Blinky Code Next, the application code gets added to main.c. I add the includes for the LED components plus the SDK port I/O interface: #include "LED1.h" #include "LED2.h" #include "LED3.h" #include "fsl_port.h" Before using the components, I have to initialize them: /* init components */ LED1_Init(); LED2_Init(); LED3_Init(); 💡 In normal Processor Expert that driver initialization is done in PE_low_level_init(). Next I could blink the LEDs with something like this: /* run the code */ for(;;) { LED1_Neg(); LED2_Neg(); LED3_Neg(); } That’s it! Compile and run the code on the board, and you should see the LED’s blinking while stepping with the debugger. If you would like to have a delay, I can add the Wait component to the Processor Expert mother project: Generate code again, and add: #include “WAIT1.h” to main.c and I can have a blinky code like this which changes the LED color every 500 ms: for(;;) { LED1_On(); WAIT1_Waitms(500); LED1_Off(); LED2_On(); WAIT1_Waitms(500); LED2_Off(); LED3_On(); WAIT1_Waitms(500); LED3_Off(); } Summary I believe I have found a good way to carry over most of the investments into Processor Expert I have: I’m able to continue my driver code with the Kinetis SDK V2.0 with the usage of a ‘mother’ Processor Expert project. That approach is similar to the ‘Processor Expert Driver Suite’ approach as it exits to generate code for Keil or IAR IDEs. Of course with that approach I’m not able to get the expert knowledge of Processor Expert about clocks and pin muxing. But at least I can continue to use Processor Expert components in the new SDK V2.0 world. Projects used in this article are GitHub here: and Happy Experting 🙂 Links - Concept how to use Processor Expert with Kinetis SDK V2.0: Mother of Components: Processor Expert with NXP Kinetis SDK V2.0 Projects - Projects on GitHub: and - Latest McuOnEclipse Components release: McuOnEclipse Components: 8-May-2016 Release - NXP Kinetis SDK: - Article about SDK V2.0: First NXP Kinetis SDK Release: SDK V2.0 with Online On-Demand Package Builder - Overview about Processor Expert: this is really good idea, thanks a lot Pingback: Tutorial: Using Eclipse with NXP MCUXpresso SDK v2 and Processor Expert | MCU on Eclipse Hi Erich, In the article, you say “Get the Kinetis SDK for the K22F from and have it installed.” So I went there, selected a processor (K64FX512) and toolchain (MCUXpresso IDE, even though I’m actually using Eclipse Oxygen with the GNU ARM toolchain and your plugins). I get a zip file, SDK_2.3.0_MK….zip. Now what? I need to get to the point where I have the “File > New > Kinetis SDK v2.x Project” option available. For the File > New > Kinetis SDK v2.x project option to be available, you have to install the NPW (New Project Wizard) plugin which is part of the Processor Expert plugins. So you have to install it if you don’t see that menu. As for the SDK zip file: you need to have to generate the SDK on the McuXpresso server with the KDS option, download the zip file and then point from the NPW to that folder where you have extracted the SDK. I hope this helps, and Happy New Year! Erich Hmm. “…you have to install the NPW (New Project Wizard) plugin which is part of the Processor Expert plugins…” which I get from…? The problem I’m having is that this site is so packed full of information it’s hard to find a series of articles that start from installing Eclipse and ends with, say, blinky with Processor Expert, or blinky with FreeRTOS. The list that I’ve been following so far is this: 1. Installing Eclipse Oxygen with GNU ARM MCU plugins, ARM Cortex Build tools, and a Debug probe interface. 2. Installing Processor Expert: 3. Installing MCUOnEclipse Components: (how to): (latest announce): 4. Blinky with Processor Expert: As you can see, there must be a missing step between 3 and 4. Or maybe 2 and 3. Or maybe 1 and 2. So the immediate question is, what am I missing? More generally, it would really help, I think, to have a roadmap like the above posted somewhere prominently. Do you have the following menu items present? – File > New > Kinetis SDK 1.x projects and File > New > Processor Expert Projects? The File > New > Kinetis SDK 2.x projects is from the KDS update site (maybe I have missed to describe that step?). Use the menu Help > Install new software and point to as update site. There is the ‘New Kinetis SDK 2.x Project Wizard as plugin to install. I hope this helps, Erich Yes, that is what was missing. Thank you! great! I have added a note about this in the article Ooh, it looks like installing the New Kinetis SDK 2.x Project Wizard fails: Missing requirement: New Kinetis SDK 2.x Project Wizard 2.1.0.201703201355 (com.nxp.feature.npw4sdk.feature.group 2.1.0.201703201355) requires ‘ilg.gnuarmeclipse.core 2.6.1’ but it could not be found. The only thing I have close to that is ilg.gnumcueclipse.core_4.2.1. I think maybe NXP is just not keeping up with the latest releases of the GNU ARM plugins. Yes, I faced issues like this with the latest plugins too. That’s why this is the version of the plugins I have installed: ilg.gnuarmeclipse.repository-3.4.1-201704251808 You can download zip files from previous GnuMcuEclipse plugins from the GnuMcuEclipse github site. Argh, that’s frustrating! I think I’m just going to try MCUXpresso. In the end, I just want to write code. I really appreciate your replies and your commitment to the site, thank you! LikeLiked by 1 person
https://mcuoneclipse.com/2016/05/20/tutorial-blinky-with-nxp-kinetis-sdk-v2-0-and-processor-expert/
CC-MAIN-2021-17
refinedweb
1,656
71.04
This topic describes how Windows Management Instrumentation (WMI) scripting is used with Microsoft Speech Server (MSS). Use WMI scripting to perform tasks such as saving and loading configuration settings, monitoring servers, starting and restarting services, or initiating an action when a server shuts down. For information about administrative scripts, see Running Server Scripts. For more information on scripting in Microsoft Windows, see Windows Script in the MSDN Library. For more information about WMI scripting, see the following topics in the MSDN Library: WMI Scripting Primer: Part 1 WMI Scripting Primer: Part 2 The scripts that ship with MSS are to be used as operations tools. The sample scripts are located in \Program Files\Microsoft Speech Server\Administrative Tools\Scripts. Run scripts from the command line, or by double-clicking the script file in Windows Explorer. The following procedure describes how to run scripts from the command line. Click Start, click Run, type cmd and press ENTER. In the command window, browse to the folder \Program Files\Microsoft Speech Server\Administrative Tools\Scripts. To run a script, type cscript followed by the script file name, and press ENTER. Use either VBScript or Windows JScript to write WMI scripts. For more information on VBScript and JScript, browse the MSDN Library online to the path Web Development\Scripting\Windows Script Technologies. See the separate sections on VBScript and JScript. Notepad is a good authoring tool. The script must be saved as a text file with a .vbs extension. The following procedure provides some sample code, and steps through the mechanics of creating a valid VBScript file and running the script contained in the file. Warning Microsoft JScript .NET issues a security exception when partially trusted code attempts to create a Function object using either the eval or function constructs. For instructions on how to work around this issue, see this Microsoft Knowledge Base article. Open Notepad. The following sample code will start the Calculator application. Copy the code into Notepad. set process = GetObject("winmgmts:{impersonationLevel=impersonate}!Win32_Process") result = process.Create ("calc.exe",null,null,processid) WScript.Echo "Method returned result = " & result WScript.Echo "Id of new process is " & processid In Notepad, on the File menu, select Save As, enter StartNote.vbs as the file name, and press ENTER. To run the script, click Start, click Run, type cmd and press ENTER. In the command window, browse to the folder containing StartNote.vbs, type cscript StartNote.vbs and press ENTER. The following sections provide examples of how WMI scripts can be used to perform common operations in administering MSS installations. Use the Properties_ property of the SWbemObject object to access property values of MSS objects. Use the Put_ method to update a server instance with new property values. The following code sample demonstrates how this can be done. For more information on getting and setting property values, see Getting and Setting WMI Object Properties. ' Get telephony server object on specified host Set objTelServer = GetObject("winmgmts:\\" & strHostname & "\root\MSS:TAS=@") ' Print configuration summary For Each p in objTelServer.Properties_ WScript.Echo p.Name, " = ", p.Value objTelServer.Properties_("ScriptTimeout")=5 'Update WMI with a modified instance objTelServer.Put_ For information on using methods to stop and start a service, see Scripting Win32 Provider Methods. Events are real-time notifications that something of interest has changed in a WMI-managed resource. Use WMI scripting to subscribe to events that indicate a change to a WMI-managed resource. For example, it is possible to subscribe to an event that indicates when the amount of space on a logical disk drive drops below an acceptable threshold. To monitor for changes in a service, use following query like this one, written in WMI Query Language: ' Format and issue a query for telephony server service change notification events. strQuery = "SELECT * FROM __InstanceModificationEvent WITHIN 2 " strQuery = strQuery & "WHERE TargetInstance ISA 'TAS'" Create a sink object to receive event callbacks, and a subroutine to handle the events. The following code sample demonstrates a sink subroutine, and the statement creating the sink. ' Handler for WMI delivered service events. Sub SINK_OnObjectReady(objWbemObject, objAsyncContext) Set t = objWbemObject.TargetInstance Set p = objWbemObject.PreviousInstance If p.State <> t.State Then WScript.Echo t.Path_.Server & ": " & Time & ": " & p.State & " -> " & t.State & " - " & t.Path_.Class End If End Sub ' Connect to MSS namespace on the specified machine. Set objLocator = CreateObject("WbemScripting.SWbemLocator") Set objNamespace = objLocator.ConnectServer(strHostname, "root/MSS") ' Create a sink to process the service change events asynchronously. Set objSink = WScript.CreateObject("WbemScripting.SWbemSink", "SINK_") Use the SWbemServices.ExecNotifcationQueryAsync method to execute the event query and identify the event sink, as shown in the following statement. ' Execute query and await new events objServices.ExecNotificationQueryAsync objSink, strQuery Understanding WMI Scripting | Running Server Scripts | Windows Management Instrumentation Reference
http://technet.microsoft.com/en-us/library/bb684732.aspx
crawl-002
refinedweb
787
50.73
The Atlassian Community can help you and your team get more value out of Atlassian products and practices. Need to remove Move , Clone issue ,Convert to issue from More drop down for Bug Issue type. I do not want to use permission Scheme. I would like to use script runner for this issue. I have to question why you don't want to use permissions for this, and also why you think there is any point in it. What is the benefit of removing an option from a menu, one that you still want people to be able to do, and now have to cast around with fiddly url building to be able to do it? What do you gain from making things harder for people to find? Hi @Garden16_ For this you have to use Script Fragment feature of Script runner. In Script Fragment use "Hide System or Plugin UI element". In Hide What section, select clone-issue,move-issue and subtask-to-issue (if you want to stop conversion) In Condition Block , enter the condition based on your requirement , include subtask issue type as well you want to hide Convert to Issue button from More. import com.atlassian.jira.component.ComponentAccessor if(jiraHelper.project?.key == "VY" && issue.issueType.name == "Bug" ) { false } else { true } Hope it Works for you! Thanks, V.
https://community.atlassian.com/t5/Adaptavist-questions/Need-to-remove-Move-Clone-issue-Convert-to-issue-from-More-drop/qaq-p/1905120
CC-MAIN-2022-40
refinedweb
222
70.84
Not every bad block means that the entire disk will deteriorate soon. I've had fixed installed disks where bad blocks were limited to a small area, and I kept using those disks (one for years) by simply allocating the bad blocks with a file, and making sure I never delete that file. In other cases, especially if a drop of the disk led to the damage (which is especially likely with an iPod, though), the head might have gotten damaged, a moving part come loose or bent, and that's more likely to lead to more errors soon. In any case - what this hint does is to try to repartition the disk, creating a unused space around the bad area. The description looks a little clueless, as it doesn't even take into account that the bad area could be anywhere. However, if you have a little bit of a clue about how a hard disk is layed out, you can use my "iBored" to scan the entire disk for bad blocks. Then you can use those bad block numbers to decide how to partition your disk to avoid the bad areas. Or, even mark the area as bad, especially if you use the FAT format instead of HFS (in FAT, it's much easier to mark blocks as bad). I could even add such an option to iBored, if someone would promise all my dreams would come true in return (or, pay me for it) :) true, while not all indications of bad blocks will mean it will get worse, it does fairly often. And how much do you trust a drive that has already shown signs of a problem? I guess with an iPod it isn't as much of an issue, since the computer is the backup for the iPod. With a computer, I replace the drive every time, if they want to use the (possibly) failing drive as a backup or something like that, but not as a primary storage device. Yeah, but what's your point? The article was specifically about making the best out of a bad situation - and for that's it valid, especially because an iPod usually only hold a backup of your iTunes lib, not original data - so if that iPod should die eventually, nothing is lost. No one suggests doing that with your main hard drive, neither me nor the one who posted the article. Don't have an account yet? Sign up as a New User Visit other IDG sites:
http://hints.macworld.com/comment.php?mode=view&cid=122453
CC-MAIN-2014-15
refinedweb
421
72.19
Installing Magic Installing magic Recently a method for imputing single cell gene expression matricies was posted on biorxiv by David van Dijk et al., called magic (Markov Affinity-based Graph Imputation of Cells). I’ve been analysing single cell RNA-seq data recently, and this method looks like it could be useful when trying to find co-transcriptional networks, as single cell data suffers from dropout which makes finding co-transcriptional networks hard. I had lots of problems getting magic installed and running, so will document them here for future reference. Firstly, there seems to be an error in the setup.py script where it looks for a non-existent /data directory that should contain test data. Running pip3 install . as instructed in the readme resulted in an error, and I have raised an issue on github. Commenting out the last few lines of the setup.py script seemed to provide a temporary fix. The next problem was getting Tk to work properly with python. Tk is a GUI toolkit and not park of python itself. Chances are that Tk is installed somewhere on your computer, and the problem is that python doesn’t know where it is. After trying lots of different things, the solution I found was to install python3 using the mac installer and launching IDLE, as this finds and links the Tk installation with python at runtime. From the python website: The Python for Mac OS X installers downloaded from this website dynamically link at runtime to Tcl/Tk macOS frameworks. I then found the path of the newly installed python3 (it was symlinked to /usr/local/bin/python3 for me) and used this to create a new virtualenv: mkvirtualenv -p /usr/local/bin/python3 py3 I then installed magic again in the virtualenv (from the github repo): pip3 install . Next I installed all the jupyter stuff. It’s important to link the right ipython kernel to the jupyter notebook, otherwise it will seem like you still don’t have access to Tk, even though at this point you can sucessfully import tkinter in python3. To do this, install jupyter, ipython, the ipython kernel, and then link the kernel: pip install jupyter ipython ipykernel python3 -m ipykernel install --user Now you should be able to import magic without any problems, and use it in a jupyter notebook. You can also start the GUI by running the magic_gui.py script: python3 magic_gui.py
http://timoast.github.io/blog/2017-03-03-installing-magic/
CC-MAIN-2020-16
refinedweb
406
69.72
In the v14 environment: rdm-convert --sed-d dbname.ddl Use '–sed' rather than '–sed-d' if your original DDL is compiled without the '-d' (duplicates allowed) command-line option. Two files will be generated by rdm-convert: dbname.sdl and dbname.sed. The .sdl file is SQL DDL, and you will use it to compile your new database schema. The .sed file is a script for the 'sed' stream editor, used for editing source code with mechanical changes. $ rdm-sql rdm-sql: create database dbname; rdm-sql: .r dbname.sdl rdm-sql: commit; It is likely that the '.r dbname.sdl' failed because of compilation errors due to SQL reserved words. If this happens, perform the following steps and repeat step 2 above. s/\bINV_TRANS\b/REF_INV_TRNSCTN/gand s/\bTRANS\b/TABLE_TRNSCTN/g import into table1 from file "table1.csv"; import into table2 from file "table2.csv"; ... import into tableN from file "tableN.csv"; Then run rdm-sql to do the work: $ rdm-sql rdm-sql: use dbname; rdm-sql: .r imports.sql rdm-sql: commit; A Note about ordering of the imports: RDM 12 set definitions are implemented in v14 SQL as foreign keys, where the member record (SQL row) references the primary key of the owner record. This will happen automatically when you import data if all referenced rows are created before the referencing rows. In other words, all owner records must exist before the member records exist. Study your original DDL file, or the SDL file produced by rdm-convert. Order the insert statements such that the owner/primary rows are inserted before the member/referencing rows. Sometimes there are circular references in databases, where one or more tables create a cycle. This normally creates a tree structure in the rows. It means that it is not possible to order the imports in a way where all primaries exist first. If you have this situation, perform the following steps: sed –i -f dbname.sed *.c *.cpp rdm-compile –s dbname.sdl If your source files include dbname.h, you will need to change the include statement to: #include "dbname_structs.h" If your source files include dbname_dbd.h, change to dbname_cat.h. If you include system header files (those with angle-brackets like <stdio.h>), you will need to move all of them to follow the RDM headers.
https://docs.raima.com/rdm/14_1/create.html
CC-MAIN-2019-09
refinedweb
391
68.16
(For more resources on Python, see here.) Introduction Animation is about making graphic objects move smoothly around a screen. The method to create the sensation of smooth dynamic action is simple: - First present a picture to the viewer’s eye. - Allow the image to stay in view for about one-twentieth of a second. - With a minimum of delay, present another picture where objects have been shifted by a small amount and repeat the process. Besides the obvious applications of making animated figures move around on a screen for entertainment, animating the results of computer code gives you powerful insights into how code works at a detailed level. Animation offers an extra dimension to the programmers’ debugging arsenal. It provides you with an all encompassing, holistic view of software execution in progress that nothing else can. Static shifting of a ball We make an image of a small colored disk and draw it in a sequence of different positions. How to do it… Execute the program shown and you will see a neat row of colored disks laid on top of each other going from top left to bottom right. The idea is to demonstrate the method of systematic position shifting. # moveball_1.py #>>>>>>>>>>>>> from Tkinter import * root = Tk() root.title("shifted sequence") cw = 250 # canvas width ch = 130 # canvas height chart_1 = Canvas(root, width=cw, height=ch, background="white") chart_1.grid(row=0, column=0) # The parameters determining the dimensions of the ball and its # position. # ===================================== posn_x = 1 # x position of box containing the ball (bottom) posn_y = 1 # y position of box containing the ball (left edge) shift_x = 3 # amount of x-movement each cycle of the 'for' loop shift_y = 2 # amount of y-movement each cycle of the 'for' loop ball_width = 12 # size of ball - width (x-dimension) ball_height = 12 # size of ball - height (y-dimension) color = "violet" # color of the ball for i in range(1,50): # end the program after 50 position shifts posn_x += shift_x posn_y += shift_y chart_1.create_oval(posn_x, posn_y, posn_x + ball_width, posn_y + ball_height, fill=color) root.mainloop() #>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> How it works… A simple ball is drawn on a canvas in a sequence of steps, one on top of the other. For each step, the position of the ball is shifted by three pixels as specified by the size of shift_x. Similarly, a downward shift of two pixels is applied by an amount to the value of shift_y. shift_x and shift_y only specify the amount of shift, but they do not make it happen. What makes it happen are the two commands posn_x += shift_x and posn_y += shift_y. posn is the abbreviation for position. posn_x += shift_x means “take the variable posn_x and add to it an amount shift_x.” It is the same as posn_x = posn_x + shift_x. Another minor point to note is the use of the line continuation character, the backslash “”. We use this when we want to continue the same Python command onto a following line to make reading easier. Strictly speaking for text inside brackets “(…)” this is not needed. In this particular case you can just insert a carriage return character. However, the backslash makes it clear to anyone reading your code what your intention is. There’s more… The series of ball images in this recipe were drawn in a few microseconds. To create decent looking animation, we need to be able to slow the code execution down by just the right amount. We need to draw the equivalent of a movie frame onto the screen and keep it there for a measured time and then move on to the next, slightly shifted, image. This is done in the next recipe. Time-controlled shifting of a ball Here we introduce the time control function canvas.after(milliseconds) and the canvas.update() function that refreshes the image on the canvas. These are the cornerstones of animation in Python. Control of when code gets executed is made possible by the time module that comes with the standard Python library. How to do it… Execute the program as previously. What you will see is a diagonal row of disks being laid in a line with a short delay of one fifth of a second (200 milliseconds) between updates. The result is shown in the following screenshot showing the ball shifting in regular intervals. # timed_moveball_1.py #>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from Tkinter import * root = Tk() root.title("Time delayed ball drawing") cw = 300 # canvas width ch = 130 # canvas height chart_1 = Canvas(root, width=cw, height=ch, background="white") chart_1.grid(row=0, column=0) cycle_period = 200 # time between fresh positions of the ball # (milliseconds). # The parameters determining the dimensions of the ball and it's # position. posn_x = 1 # x position of box containing the ball (bottom). posn_y = 1 # y position of box containing the ball (left edge). shift_x = 3 # amount of x-movement each cycle of the 'for' loop. shift_y = 3 # amount of y-movement each cycle of the 'for' loop. ball_width = 12 # size of ball - width (x-dimension). ball_height = 12 # size of ball - height (y-dimension). color = "purple" # color of the ball for i in range(1,50): # end the program after 50 position shifts. posn_x += shift_x posn_y += shift_y chart_1.create_oval(posn_x, posn_y, posn_x + ball_width, posn_y + ball_height, fill=color) chart_1.update() # This refreshes the drawing on the canvas. chart_1.after(cycle_period) # This makes execution pause for 200 # milliseconds. root.mainloop() How it works… This recipe is the same as the previous one except for the canvas.after(…) and the canvas.update() methods. These are two functions that come from the Python library. The first gives you some control over code execution time by allowing you to specify delays in execution. The second forces the canvas to be completely redrawn with all the objects that should be there. There are more complicated ways of refreshing only portions of the screen, but they create difficulties so they will not be dealt with here. The canvas.after(your-chosen-milliseconds) method simply causes a timed-pause to the execution of the code. In all the preceding code, the pause is executed as fast as the computer can do it, then when the pause, invoked by the canvas.after() method is encountered, execution simply gets suspended for the specified number of milliseconds. At the end of the pause, execution continues as if nothing ever happened. The canvas.update() method forces everything on the canvas to be redrawn immediately rather than wait for some unspecified event to cause the canvas to be refreshed. There’s more… The next step in effective animation is to erase the previous image of the object being animated shortly before a fresh, shifted clone is drawn on the canvas. This happens in the next example. The robustness of Tkinter It is also worth noting that Tkinter is robust. When you give position coordinates that are off the canvas, Python does not crash or freeze. It simply carries on drawing the object ‘off-the-page’. The Tkinter canvas can be seen as just a tiny window into an almost unlimited universe of visual space. We only see objects when they move into the view of the camera which is the Tkinter canvas.
https://hub.packtpub.com/python-graphics-animation-principles/
CC-MAIN-2018-17
refinedweb
1,189
64.51
Copyright © 2006-2007. In this document the term HTML is used to refer to the XHTML dialect of HTML [XHTML]. GRDDL works through associating transformations with an individual document either through direct inclusion of references or indirectly through profile and namespace documents. For XML dialects the transformations are commonly expressed using XSLT 1.0, although other methods are permissible. Generally, if the transformation can be fully expressed in XSLT 1.0 then it is preferable to use that format since GRDDL processors should be capable of interpreting an XSLT 1.0 document. While anyone can create a transformation, a standard transform library has been provided that can extract RDF that's embedded directly in XML or HTML using <rdf:RDF> tags as well as extract any profile transformations. GRDDL transformations can be made for almost any dialect, including microformats. This document may be read in conjunction with the GRDDL Use Cases [GRDDL-SCENARIOS] which describes a series of common scenarios for which GRDDL may be suitable. Readers desiring the technical details of the GRDDL mechanism or wishing to implement GRDDL themselves should refer to the GRDDL Specification [GRDDL]. One persistent and troublesome problem is discovering precisely when and where your friends are together so that you can schedule a meeting. In our example, a frequent traveller called Jane is trying see if at any point next year she can schedule a meeting with all three of her friends, despite the fact that all of her friends publish their calendar data in different ways. With GRDDL, she can discover if they can meet up without forcing her friends to all use the same centralized Web-based calendar system. GRDDL provides a number of ways for GRDDL transformations to be associated with content, each of which is appropriate in different situations. The simplest method for authors of HTML content is to embed a reference to the transformations directly using a link element in the head of the document. Microformats are simple conventions for embedding semantic markup for a specific domain in human-readable documents. One of Jane's friends has marked up their schedule using the hCalendar microformat. The hCalendar microformat uses HTML class attributes to associate event-related semantics with elements in the markup, as shown in Robin's calendar: <-23">22</abbr> </li> <li class="vevent"> <strong class="summary">New line review</strong> in <span class="location">Cologne, Germany</span>: <abbr class="dtstart" title="2006-10-26">Oct 26</abbr> to <abbr class="dtend" title="2006-10-28">27</abbr> </li> <li class="vevent"> <strong class="summary">Clothing 2006</strong> in <span class="location">Rome, Italy</span>: <abbr class="dtstart" title="2006-12-01">Dec 1</abbr> to <abbr class="dtend" title="2006-12-06">5</abbr> </li> </ol> </li> <li>2007 <ol> <li class="vevent"> <strong class="summary">Web Design Conference</strong> in <span class="location">Edinburgh, UK</span>: <abbr class="dtstart" title="2007-01-08">Jan 8</abbr> to <abbr class="dtend" title="2007-01-11">10</abbr> </li> <li class="vevent"> <strong class="summary">Board Review</strong> in <span class="location">New York, USA</span>: <abbr class="dtstart" title="2007-02-23">Feb 23</abbr> to <abbr class="dtend" title="2007-02-25" (see HTML specification, Meta data profiles). The profile URI for GRDDL is and by including this URI in her document Robin is declaring that her markup can be interpreted using GRDDL. The resulting HTML looks like this <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" ""> <html xmlns="" xml: <head profile=""> <title>Robin's Schedule</title> </head> <body> ... Then she needs to add a link element containing the reference to the specific GRDDL transformation for converting HTML containing hCalendar patterns into RDF. She can either write her own GRDDL transformation or re-use an existing transformation, and in this case there's one available for calendar data. The link element contains the token transformation in the rel attribute and the URI of the GRDDL transformation itself for extracting RDF is given by the value of the href attribute. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" ""> <html xmlns="" xml: <head profile=""> <title>Robin's Schedule</title> <link rel="transformation" href=""/> </head> <body> ... The profile URI in the Robin's new GRDDL-enabled calendar file signals that the receiver of the document may look for link elements with a rel attribute containing the token transformation and use any or all of those links to determine how to extract the data as RDF from Robin's calendar. Individual publishers of data using popular vocabularies can also give users of their data of being transformed into RDF without having to even add any new markup to individual documents. This is done by referencing GRDDL transformations in a profile document referenced in the head of the HTML. Other XML vocabularies may use their namespace documents for the same purpose. or namespace document. This method requires no work from the content author of individual documents but requires that the profile document contain a reference to a GRDDL transformation and be accessible to the GRDDL client, and so may require work from the creator and maintainer of the dialect. Yet this is a good use of time, since once the transformation has been linked to the profile document, all the users of the dialect get the added value of RDF. Another of Jane's friends, David, has chosen to mark up his schedule using Embedded RDF. Embedded RDF has a link to a GRDDL transformation in its profile document. < the profile may contain references to GRDDL transformations that can be applied to David's calendar, even if David does not explicitly link these transformations to his calendar. Jane's agent applies a standard transformation for profile documents to the Embedded RDF profile document in order to find a link to a transformation for all Embedded RDF documents, including David's HTML document. This transformation for all Embedded RDF documents,, is identified in the profile document using the rel attribute of profileTransformation. This process may be replicated with any vocabulary that has a profile URI. Microformat-enabled web-pages on the Web may not be valid XHTML. For this purpose, one may wish to use a program like Tidy (or some other algorithm) to make the web-page equivalent to valid XHTML before applying GRDDL [GRDDL-SCENARIOS]. Also, many microformats may not have profiles with transformations. A user can always take matters into their own hands by applying a GRDDL transformation for a microformat directly to the web page in order to get RDF. This is risky since if the author of the document or microformat vocabulary does not explicitly license a GRDDL transformation, the responsibility for those RDF is now in the hands of the user. Jane would like to meet with David and Robin, but does not want to manually check all their calendars, a process that is tiresome and prone to human error. To solve this problem, Jane decides to use a GRDDL implementation that converts both Robin and David's calendar to RDF. Jane stores her calendar directly in RDFa, a way of embedding RDF directly into HTML. She can use a GRDDL Transformation for RDFa to convert RDFa to RDF/XML, in order to get her entire schedule in RDF/XML. One of the advantages of the RDF data model is that RDF data can be easily merged by adding it to a RDF store, so Jane can merge and query all the calendars together once they are transformed into RDF. Jane uses SPARQL [SPARQL] to query her data, which automatically merges the calendar data sources before running the query. SPARQL (The SPARQL Protocol and RDF Query Language) is a query language for RDF with a syntax similar to well-known data-base query languages. Online forms for submitting SPARQL queries can be found at on the this wiki.. Her scheduling SPARQL query looks like this: PREFIX ical: <> PREFIX xs: <> SELECT ?start1 ?stop1 ?loc1 ?summ1 ?summ2 ?summ3 FROM <> FROM <> FROM <> WHERE { ?event1 a ical:Vevent; ical:summary ?summ1 ; ical:dtstart ?start1 ; ical:dtend ?stop1 ; ical:location ?loc1. ?event2 a ical:Vevent; ical:summary ?summ2 ; ical:dtstart ?start2; ical:dtend ?stop2; ical:location ?loc2. ?event3 a ical:Vevent; ical:summary ?summ3 ; ical:dtstart ?start3; ical:dtend ?stop3; ical:location ?loc3. FILTER ( ?event1 != ?event2 && ?event2 != ?event3 && ?event1 != ?event3 ) . FILTER ( xs:string(?start1) = xs:string(?start2) ). FILTER ( xs:string(?stop1) = xs:string(?stop2) ). FILTER ( xs:string(?loc1) = xs:string(?loc2) ). FILTER ( xs:string(?start1) = xs:string(?start3) ). FILTER ( xs:string(?stop1) = xs:string(?stop3) ). FILTER ( xs:string(?loc1) = xs:string(?loc3) ). FILTER ( xs:string(?start3) = xs:string(?start2) ). FILTER ( xs:string(?stop3) = xs:string(?stop2) ). FILTER ( xs:string(?loc3) = xs:string(?loc2) ). FILTER ( xs:string(?summ1) <= xs:string(?summ2) ). FILTER ( xs:string(?summ2) <= xs:string(?summ3) ). } The SELECT line determines which variable will appear in the results, here one of the start dates, one of the stop dates, a location and a summary. The FROM lines identify the data sources to use in the query, in this case the RDF/XML derived from Jane, David and Robin's original documents. The WHERE section provides a pattern which can match three events. The first block of FILTERs match up identical start and stop dates as well as locations between the three events. These values, which may be differently typed, are simplified to simple literals with the str() operator. The final two FILTER lines are idiomatic expressions which prevent multiple results returning due to the interchangeability of the variables. The relevant results of querying the results of GRDDL is: So Jane discovers her friends Robin and David are both in town with her in Edinburgh on January 8th through 10th for the Web Design Conference. Since this is such as useful SPARQL script, she considers bundling it up as a web service so her friends can use it easily without writing SPARQL from scratch. In this example, we will combine data dialects as different as reviews and social networks in order to guarantee the booking a hotel with a high review from a trusted friend. This process of booking a hotel highlights the role of GRDDL in aggregating data from a variety of different formats and of using RDF as a common format to "mashup" all sorts of data, not just calendar data. We can of course write code in our favorite language to extract and combine these calendar data formats without using RDF. This ability to combine and query multiple kinds of microformat data from different web-pages shows functionality that RDF delivers that simple extraction of microformats to a custom data format can not. This example is similar to the guitar review use case. Jane is pleased that she has found out all her friends can finally meet up in Edinburgh. However, she is not sure of where to stay in Edinburgh, so she decides to check reviews. There are various special interest publications online which feature hotel reviews, and blogs which contain reviews by individuals. The reviewers include friends and colleagues of Jane and people whose opinion Jane values (e.g. friends and people whose reviews Jane has found useful in the past). There may also be reviews planted by hotel advertisers which offer biased views in an attempt to attract customers. First, Jane needs to get a list of people she considers trusted sources into some sort of machine readable document. One choice would be FOAF (Friend of a Friend), a popular RDF vocabulary for describing social networks of friends and personal data. Other choices include a collection of contacts stored in vCard using RDF [VCARD]. Another choice is to use microformats. A microformat that allows for more information about friends to be gleaned from the document is XFN, " XHTML Friends Network". Examples of such relationships are friends, colleagues, co-workers, and so on, as given in this example file. Since XFN relationships are embedded in anchor ( a) elements, they can be expressed in RDF in a variety of ways. Given Jane's HTML document uses the XFN microformat, a GRDDL transformation can extract RDF data. These descriptions would allow a RDF spider (a "scutter") to follow links to additional RDF content that may include more XFN, vCard, or FOAF descriptions. Jane's XFN list, is given as: <html xmlns="" xml: <head profile=""> <link rel="transformation" href="grokXFN.xsl" /> > This XFN file can be converted to RDF with the use of another GRDDL Transform for XFN, resulting in the example RDF result file. Hotel review sites include a number of reviews, including some in Edinburgh. This particular hotel review file is also marked up with the hReview that we can also convert to RDF using a transform, resulting in a RDF version of the hotel reviews. A portion of the hotel file example in HTML is given below to illustrate the use of the hReview microformat: <html xmlns="" xml: <head profile=""> <title>Hotel Reviews from Example.com</title> <link rel="transformation" href=""/> </head> <div class="hreview" id="_665"> <div class="item vcard"> <b class="fn org">Witch's Caldron Hotel, Edinburgh</b> <ul> <li> <a class="url" href="">Homepage</a> </li> </ul> <span><span class="rating">5</span> out of 5 stars</span> <ul> <li class="adr"> <div class="type">intl postal parcel work</div> <div class="street-address">313 Cannongate</div> <div><span class="locality">Edinburgh</span>, <span class="postal-code">EH8 8DD </span> <span class="country-name">United Kingdom</span></div> </li> </ul> <div class="tel"><abbr class="type" title="work msg">Homepage</abbr>: <a class="value" href="tel:+44 1317861235">+44 1317862235</a></div> With this combined "mashed-up" data we can find Jane's friends and find the hotel reviews that those friends created. Using GRDDL we can glean information, including the ratings, about the hotels. Once we have this data as RDF we can "mash-up" the data of the friends and the hotel reviews. Diagram of hotel data relationships In order to find hotels with specific ratings or higher from a group of her trusted friends, we can now query the "mashed-up" data with SPARQL. SPARQL (The SPARQL Protocol and RDF Query Language) is a query language for RDF that can automatically "mash-up" data from multiple sources. PREFIX rdfs: <> PREFIX rev: <> PREFIX vcard: <> PREFIX dc: <> PREFIX foaf: <> SELECT DISTINCT ?rating ?name ?region ?hotelname FROM <> WHERE { ?x rev:hasReview ?review; vcard:ADR ?address; vcard:FN ?hotelname . ?review rev:rating ?rating . ?address vcard:Locality ?region. FILTER (?rating > "2"). ?review rev:reviewer ?reviewer. ?reviewer foaf:name ?name; foaf:homepage ?homepage } This query results in: The query unfortunately gets us all hotels from anywhere in the world with more than 2 stars, so we need to further restrict the results to only hotels in Edinburgh, as we do in this improved query. PREFIX foaf: <> PREFIX rev: <> PREFIX vcard: <> SELECT DISTINCT ?rating ?name ?hotelname ?region FROM <> WHERE { ?x rev:hasReview ?review; vcard:ADR ?address; vcard:FN ?hotelname . ?review rev:rating ?rating . ?address vcard:Locality ?region. FILTER (?rating > "2" && ?region = "Edinburgh"). ?review rev:reviewer ?reviewer. ?reviewer foaf:name ?name; foaf:homepage ?homepage } This results in: Now the results will be hotels with a rating of 2 stars or higher that are located in Edinburgh. The problem with the possible list of results is that there could be biased reviews. The next step is to further restrict the results to only reviews by our trusted list of contacts. Using the XFN links in Jane's page which identifies the URIs of people Jane trusts, by matching URIs we can select only those reviewers who are Jane's friends, as done in this further improved query. PREFIX rdfs: <> PREFIX foaf: <> PREFIX rev: <> PREFIX vcard: <> PREFIX xfn: <> SELECT DISTINCT ?rating ?name ?region ?homepage ?xfnhomepage ?hotelname FROM <> FROM <> WHERE { ?x rev:hasReview ?review; vcard:ADR ?address; vcard:FN ?hotelname. ?review rev:rating ?rating . ?address vcard:Locality ?region. FILTER (?rating > "2" && ?region = "Edinburgh"). ?review rev:reviewer ?reviewer. ?reviewer foaf:name ?name; foaf:homepage ?homepage. ?y xfn:friend ?xfnfriend. ?xfnfriend foaf:homepage ?xfnhomepage. FILTER (?xfnhomepage = ?homepage). } We finally get the result we want: A hotel with a ranking of 5 reviewed by a trusted friend. SPARQL results can be obtained as XML or JSON and can easily be consumed by another application. This can display the results on screen, email them to Jane or it can be pulled into another application to search the web for the best prices on the short list of hotels. GRDDL is also useful for integrating data from general-purpose XML dialects produced by everyday applications. A trove of accumulated information is stored in spreadsheets, and spreadsheets can be saved using a general-purpose XML format. Integrating, reusing, and "mashing-up" information stored in spreadsheets can be valuable, and GRDDL provides a mechanism for accessing this information as RDF in order to accomplish this. In this example, we will specifically consider the problem of gleaning information from Microsoft® Excel spreadsheets, although other spreadsheet-like XML dialects would be able to take advantage of the same basic mechanism. Jane serves as the secretary for a small group with her two friends, David and Robin, that meets once a month. She tracks the attendance at these meetings using a simple Excel spreadsheet, and she starts a new spreadsheet each year. She wants the members of this group to be able to query these accumulated statistics freely, and she recognizes that RDF would support this kind of merging and querying functionality. She decides to use GRDDL to allow any of the members of the group to glean RDF from any of these attendance records and query the data along with any other RDF that may be available. Jane intends to use a GRDDL transformation called xcel-mf2rdf.xsl, which requires the Excel spreadsheet to conform to a particular profile. She first must identify which cells in her spreadsheet are data cells. In the case of an attendance spreadsheet, the data cells are the attendance indicators, and she identifies these cells by giving them the name "Data". She must also identify the header cells. In this case, the header cells are the cells containing names and dates; Jane identifies these cells by giving them the name "Header". Next, Jane gives each data and header cell an additional name, which serves as the local name of the property for that cell. She names the date cells "date", the member name cells "name", and the attendance cells "present". Finally, Jane must set two custom properties globally on the spreadsheet. The first property is called "profile", and this particular profile has profile URI. The second property is called "namespace", and provides the namespace to be used for RDF properties in the GRDDL results; Jane chooses the namespace URI. Attendance spreadsheet with header cells selected Since GRDDL operates on XML documents, she saves her Excel files using the XML dialect that Excel provides. After saving them as XML, she adds the reference to this transformation to the root element of each attendance document. Following the directives of the Excel profile, and including the appropriate GRDDL reference, this is a slice of the resulting spreadsheet document: <="" xmlns: <!-- ... --> <CustomDocumentProperties xmlns="urn:schemas-microsoft-com:office:office"> <profile dt:</profile> <namespace dt:</namespace> </CustomDocumentProperties> <!-- ... --> <Worksheet ss: <Table> <!-- ... --> <Row ss: <!-- ... --> <Cell ss:<Data ss:2006-04</Data><NamedCell ss:<NamedCell ss:</Cell> <!-- ... --> </Row> <!-- ... --> <Row> <Cell ss:<Data ss:Robin</Data><NamedCell ss:<NamedCell ss:</Cell> <!-- ... --> <Cell><Data ss:n</Data><NamedCell ss:<NamedCell ss:</Cell> <!-- ... --> </Row> <!-- ... --> </Table> <!-- ... --> </Worksheet> </Workbook> When processed by a GRDDL-aware agent, a document such as this will be transformed into RDF that preserves the meaning of the spreadsheet: <?xml version="1.0" encoding="UTF-8"?> <rdf:RDF xmlns: <!-- ... --> <rdf:Description> <name>Robin</name> <date>2006-04</date> <present>n</present> </rdf:Description> <!-- ... --> </rdf:RDF> Jane and the other members of the group can now use this data in a variety of situations. For example, suppose there exist other records of decisions that were made at these meetings, and the record of one of those meeting was also stored in a spreadsheet that was converted to RDF. Merging these triples with the GRDDL results from the attendance record spreadsheets, Jane can now ask questions such as "who attended the meeting at which we decided to choose the new meeting location?" In SPARQL, the corresponding spreadsheet query is: PREFIX att: <> PREFIX ev: <> SELECT ?name FROM <> FROM <> WHERE { ?event ev:label "choose new meeting location" . ?event ev:date ?date . ?attendance att:date ?date . ?attendance att:name ?name . ?attendance att:present "y" . } Which would give the following results: This indicates that Jane and David were present at the meeting where that decision was made. In this example, the link to the GRDDL transformation was added by hand. However, as shown in detail in the GRDDL specification [GRDDL] for XML Schema, RDF, and HTML namespace documents may also have links to transformations for XML dialects; so a GRDDL-aware agent can also retrieve the namespace document of an XML dialect to find a GRDDL transformation by "following its nose" from the namespace on the root element of the GRDDL source document to the namespace document. The use of a namespace on the root element represents a declaration that the document conforms to the authoritative definition of that namespace as defined by the namespace owner, which may include a transformation from that XML dialect into RDF using GRDDL. There are a few rules of thumb for XML namespace owners wanting to make GRDDL transformations available for their particular dialect of XML. Given an XML document representation, a GRDDL-aware agent that wishes to determine namespace or profile transformations may resolve the namespace or profile URI to obtain a representation. Because of content negotiation and other factors, different GRDDL-aware agents resolving the same namespace or profile URI could receive different representations, which could in turn specify different namespace or profile transformations, which could in turn produce different GRDDL results. In particular, a GRDDL-aware agent that receives a namespace or profile representation that specifies GRDDL transformations may not even be aware that some other representation, specifying more or different transformations, is available. This may pose problems to users that intend to retrieve all of the available GRDDL results associated with the original XML document representation. To help prevent this problem, namespace and profile document authors that choose to serve representations that indicate namespace or profile transformations are advised to ensure that all such representations specify the same namespace or profile transformations. GRDDL can not only be used for combining HTML data, but for XML data as well. This section uses HL7 CDA, a widely deployed XML vocabulary for use in clinical data, as an example of how an XML dialect can be gleaned for RDF. This part of the primer walks through step-by-step the Health Care: Querying XML-based clinical data use-case. Kayode wants to write software components which can extract RDF descriptions from XML HL7 CDA documents transmitted from various devices in a healthcare system using a clinical ontology so that he can merge together clinical reports and use inferences to detect possible problems. CDA is a very well-designed information model and heavily optimized for messaging between computerized hospital systems, and an example CDA document is given. Below is a section of this document that describes the author of a clinical document and the patient that the document describes. This GRDDL-enhanced CDA document can be processed by an XSLT pipeline resulting in a corresponding RDF graph which expresses clinical content in expressive, heavily deployed consensus vocabularies such as Open GALEN, DOLCE: Descriptive Ontology of Linguistics and Cognitive Engineering, FOAF, and an OWL translation of HL7 RIM [OWL]. An example OWL ontology describes the basic concepts in a medical record for the purposes of this example. In a manner similar to enabling the use of GRDDL with HTML, we can add a glean:transformation attribute to the root of the document in order for a GRDDL-aware agent to interpret an HL7 CDA message transmitted using widely-used and interoperable ontologies. <ClinicalDocument xmlns="urn:hl7-org:v3" xmlns: ... <Observation> <id root="10.23.4573.15877"/> <code code="282290005" codeSystem="2.16.840.1.113883.6.96" codeSystemName="SNOMED CT" displayName="Imaging interpretation"/> <value xsi: <reference typeCode="SPRT"> <ExternalObservation classCode="DGIMG"> <id root="123.456.2557"/> <code code="56350004" codeSystem="2.16.840.1.113883.6.96" codeSystemName="SNOMED CT" displayName="Chest-X-ray"/> </ExternalObservation> </reference> </Observation> Once the transformation has been added to the root node of the example HL7 document, a GRDDL-aware agent can then transform the data into this HL7 RDF using the linked XSLT. People sometimes confuse RDF, an abstract graph-based data model [RDFC], with one of its common syntactic serializations, RDF/XML [RDFXML]. RDF can be serialized into a number of different data formats, ranging from RDF/XML to a more human-readable serialization known as Turtle, and so RDF gives the user or application the freedom to choose the syntax most useful for the task at hand. All the merging and querying of data is done on the level of the abstract graphs, not the concrete syntax. So an RDF parser can parse the same GRDDL result expressed in either Turtle, RDF/XML, or another syntax like NTriples, and on the level of the data model, the graph produced will be equivalent. Once the data is expressed in RDF, one can discover several useful facts about the patient's diagnosis that would be unclear in the original XML document. Most important is that the patient's chest X-ray (a cyc:XRayImage or foaf:Image) concludes a medical problem ( cpr:medical-sign). A SNOMED CT code is used which corresponds to a specific term in the description-logic inspired language which SNOMED CT is expressed in. Here's a snippet from the result of running the GRDDL transformation, expressed in the brief Turtle syntax for RDF. [ a cpr:patient-record; dc:date "2000-04-07"; edns:about [ a galen:Patient; foaf:family_name "Levin"; foaf:firstName "Henry"]; foaf:maker [ a foaf:Person; foaf:family_name "Dolin"; foaf:firstName "Robert"]] [ a cpr:clinical-description; cpr:description-of [ a cpr:screening-act; edns:realizes [ a cpr:medical-sign; cpr:interpretant-of [ a foaf:Image; skos:prefLabel "Chest-X-ray"]; skos:prefLabel "Chest hyperinflated"]; skos:prefLabel "Imaging interpretation"]] Given the amount of images in a collection of patient record system, it would be useful if there was some sort of way to easily detect images that were actually diagnoses of medical problems. We can use an OWL class called DiagnosingImage (both a RDF/XML example and Turtle example) that detects if images in the record have been interpreted as having some medical significance. @prefix : <> . @prefix g: <> . @prefix rdfs: <> . g:DiagnosingImage a :Class; :intersectionOf ( <> [ a :Restriction; :onProperty g:indicates; :someValuesFrom <> ] ) . g:indicates a :ObjectProperty; rdfs:comment """Property relating a foaf:Image to a medical sign it indicates"""; rdfs:domain <>; rdfs:range <>; :inverseOf <> . <> a :Thing . Once an OWL reasoner such as the Closed World Machine is run against the merge of the resulting RDF graph with the ontology, the size of our data-set is increased by additional RDF statements indicating that some of the images were actually members DiagnosingImage class. These can then be discovered in the resulting RDF graph by the use of the following example SPARQL medical query: PREFIX cpr: <> PREFIX ex: <> PREFIX skos: <> SELECT ?sign ?image FROM <> WHERE { ?image a ex:DiagnosingImage; ex:indicates [ skos:prefLabel ?sign ] } If we run this SPARQL query over our data-set that has been enlarged by the use of OWL reasoning, then we can detect that a chest has been hyperinflated. Knowing that the original CDA contains the an image with medical significance would be of importance to the patient. In this manner GRDDL allows one to bootstrap Semantic Web data from common XML dialects, and so help these XML dialects interoperate by reference to well-known ontologies and allow their content to be extended by the use of inference. This concludes the GRDDL Primer. Full technical detail of the GRDDL mechanism may be found in the corresponding Gleaning Resource Descriptions from Dialects of Languages (GRDDL) Working Draft. This output can be regenerated by putting the following input in the Technical Reports Bibliography extractor: HTML RDFC SPARQL VCARD XHTML XSLT The editor would like to thank the following Working Group members for authoring this document: This document is a product of the GRDDL Working Group. The spreadsheets example is based on work by Mark Nottingham in "Adding Semantics to Excel with Microformats and GRDDL". The version of the transformation script used in that example has a few significant changes from Mark's original. Changes since the WG decision to publish on 27 Sep include $Log: Overview.html,v $ Revision 1.1 2007/06/27 18:39:11 jean-gui primer renamed to Overview Revision 1.1 2007/06/27 18:38:18 jean-gui NOTE-grddl-primer-20070628 Revision 1.125 2007/06/27 17:30:29 hhalpin updated acknowledgements Revision 1.123 2007/06/27 17:24:48 hhalpin updated URIs of versions Revision 1.122 2007/06/27 17:21:53 hhalpin updated URIs of versions Revision 1.121 2007/06/27 17:20:51 hhalpin updated URIs of versions Revision 1.120 2007/06/27 17:18:09 jclark4 Add diagram to hotel-finding example. Revision 1.119 2007/06/27 17:10:15 hhalpin fixed dates for Note pub Revision 1.118 2007/06/27 16:58:53 jclark4 Undo the iframe scariness. Revision 1.117 2007/06/27 16:44:37 hhalpin updated with danja and chime's comments Revision 1.116 2007/06/27 14:55:25 connolly remove "microformat" from excel section; move mnot to acks section Revision 1.115 2007/06/27 14:52:37 connolly ids for authors to join across hCard and trdoc transformations Revision 1.114 2007/06/27 14:51:07 connolly add hCard markup for authors (with profile) Revision 1.113 2007/06/27 00:55:10 hhalpin moved link of hl7 tranform from test-cases to primer directory Revision 1.112 2007/06/27 00:49:19 hhalpin replaced doc29 with primer URI Revision 1.86 2007/06/26 18:56:16 jclark4 Make the inline SPARQL equivalent to the linked SPARQL in the spreadsheet section, and fix several well-formedness errors. Revision 1.85 2007/06/26 14:14:04 jclark4 Minor consistency changes to the primer and the spreadsheet for the spreadsheet example and some typo and wording changes to the primer. Revision 1.84 2007/06/24 20:04:49 hhalpin added danja's edits Revision 1.83 2007/06/22 18:53:03 bsuda fixed sparql #3 and updated primer Revision 1.82 2007/06/22 18:48:58 bsuda updated sparql queries, rdf and html and primer document to reflect the new queries Revision 1.81 2007/06/20 14:05:38 connolly uncomment embeddedRDF.png image; add hCalendar.png image back in Revision 1.80 2007/06/20 02:02:34 hhalpin minor updates to spreadsheet section, linking files Revision 1.77 2007/06/14 10:56:48 jcarroll switched to using local RDFa2RDFXML rather than td one Revision 1.76 2007/06/13 17:43:27 jclark4 Convert SPARQL results to an HTML table in the "Reusing Spreadsheets" section and fix numerous well-formedness errors. Revision 1.75 2007/06/13 17:27:47 jclark4 Add entry in the table of contents for the new "Reusing Spreadsheets" section. Revision 1.74 2007/06/13 17:15:23 connolly @@ around transition between spreadsheets and health care Revision 1.73 2007/06/13 17:11:09 connolly paste in spreadsheet example from John Clark Tue, 12 Jun 2007 16:16:36 -0400 Revision 1.72 2007/06/13 16:54:06 hhalpin updated xslt for hl7 Revision 1.71 2007/05/06 04:46:50 hhalpin hotel-data.rdf replacced by review.rdf Revision 1.70 2007/05/06 00:55:12 connolly linebreaks in the ClinicalDocument Revision 1.69 2007/05/06 00:53:02 connolly linebreaks to make the examples less wide Revision 1.68 2007/05/05 22:09:26 connolly fix pre/p markup problem, copyright unicode characters Revision 1.67 2007/05/05 20:43:24 hhalpin removed errant SPARQL query, added XFN and hReview code back in Revision 1.66 2007/05/05 20:36:11 hhalpin reverting to 1.55 plus fixes in 1.65 in Healthcare section Revision 1.55 2007/04/24 17:57:37 hhalpin added more of Chime's test case and changed some text for easier reading Revision 1.54 2007/04/11 08:21:45 hhalpin added transform library mention Revision 1.50 2007/04/11 08:18:35 hhalpin added transform library mention Revision 1.49 2007/03/21 04:35:42 hhalpin cleaned up healthcare example Revision 1.48 2007/03/14 08:11:13 hhalpin fixed rdfa transform, fixed part 2 Revision 1.45 2007/02/21 06:52:53 hhalpin added danja's comments Revision 1.43 2007/02/19 19:07:53 idavis Updated date of draft Revision 1.42 2007/02/19 19:02:21 idavis Fixed typo in SPARQL reference Revision 1.41 2007/02/19 18:55:36 idavis Addresses rreck comment, fixed typos and minor layout changes, added references in clinical data section Revision 1.40 2007/02/12 01:38:28 hhalpin added RDFa example Revision 1.39 2007/02/12 00:57:18 hhalpin added chime's test case Revision 1.37 2007/02/07 15:09:22 hhalpin updated sparql query Revision 1.36 2007/01/13 00:00:26 hhalpin edited some links Revision 1.31 2007/01/12 18:57:21 hhalpin dates fixed Revision 1.30 2007/01/12 18:55:16 hhalpin dates fixed Revision 1.29 2007/01/12 03:56:21 hhalpin edited Revision 1.28 2007/01/12 03:50:11 hhalpin minor edits Revision 1.27 2007/01/12 03:49:45 hhalpin minor edits Revision 1.25 2007/01/09 23:54:10 hhalpin fixed formatting Revision 1.22 2007/01/09 23:43:15 hhalpin using new vcard RDF Revision 1.21 2006/12/13 00:34:29 hhalpin fixing syntactic quibbles Revision 1.18 2006/10/19 07:56:45 idavis Revised references, corrected title from WD to editors draft Revision 1.17 2006/10/19 07:07:51 idavis Various minor editorial changes, spellings, grammar etc
http://www.w3.org/TR/2007/NOTE-grddl-primer-20070628/
crawl-001
refinedweb
5,769
51.89