Columns: text (string, 454 to 608k chars), url (string, 17 to 896 chars), dump (string, 9 to 15 chars), source (string, 1 value), word_count (int64, 101 to 114k), flesch_reading_ease (float64, 50 to 104)
Re: invoked "break" outside of a loop - From: Darren New <dnew@xxxxxxxxxx> - Date: Sun, 24 Sep 2006 16:59:56 GMT

Russell Trleleaven wrote: Sorry to belabor the point, but I was hoping someone would explain why "break", "uplevel break" and "namespace eval :: break" don't work like I thought they might.

As you've learned, [break] is simply a return with a special code attached. Hence, when you invoke [break], it invokes a command that returns that special code, which is caught inside your proc, which leads to the error. I.e., the proc is converting the return from [break] into a different type of return, since [break] is invoked several levels down the return stack from the [for] loop. For example...

proc AA {} {
    BB
    puts "got here AA"
}
proc BB {} {
    if {[catch { uplevel "set x 0" }]} {
        puts "caught something"
    }
    puts "got here BB"
}

Now, given that [set x 0] there returns no error, nothing prints. Were the [set x 0] to be replaced with an error call, you would expect the catch to catch it, yes? Hence, since [break] is simply a kind of error, that error return is returned to the caller for use in [catch]. Since there is no [catch], it propagates up to the environment of the proc, where proc is about to return, and says "Hey, why are you throwing a [break] without an appropriate [catch]?" Does that help? -- Darren New / San Diego, CA, USA (PST) Just because you find out you are telepathic, don't let it go to your head.
http://coding.derkeiler.com/Archive/Tcl/comp.lang.tcl/2006-09/msg00794.html
crawl-002
refinedweb
336
71.38
tl;dr: You've probably experienced it as well. You fill in a form, you navigate somewhere else and then go back, only to realize the whole form was cleared away. Maybe it was a search form or even a registration one. This shouldn't be the case: you can write a simple hook to preserve state in the browser history. Try it live here. Also, the code is live on CodeSandbox. I started by coding every stage on its own. Adding navigation later should be easy and not a problem, thanks to React Router v5. This made my coding experience REALLY fast. When I got to navigation, using RRv5 was kinda easy. I decided to use the browser history state to pass the page results between pages. The browser history state is a way of passing data or storing data in the history stack itself, instead of adding a new global state to the app. So navigating looked like this: import { useHistory } from "react-router-dom"; function MyComponent() { const history = useHistory(); return ( <button type="button" onClick={() => { history.push({ pathname: "/next-page", state: { firstSectionResult: THE_VALUE_I_WANT_TO_PASS } }); }} > Next step </button> ); } How does it work? Well, if you long-press the "back" button on your browser, you'd probably see your history for this specific tab. Using window.history.back() is a handy way to go back programmatically, but it's not the entire API that the browser gives us. It also gives us the power to manipulate the history stack by pushing a new item or replacing the current item. Each item on the history has a path name and state that you can manage. Most projects I've seen don't use the history state at all, despite it being a super powerful tool! I also provided a hook to read the value off the history state: import { useHistory } from "react-router-dom"; export function useFirstSectionResult(): FirstSectionResult | undefined { const history = useHistory(); const result = history.location.state?.firstSectionResult; return result; } Now I could use the history state in the next page just by calling this hook. If I got undefined, I could redirect the users back to the first section so they could complete it. I saw that this idea worked, and I was excited to press the browser's "back" button and then immediately press "next" to see that the state management actually worked. So I hit the "back" button. The form that I just filled is empty. When I clicked "next" the second stage of the form worked as expected, but I was very annoyed by the fact that all the fields I filled are now completely empty. Solving the State Preservation Problem This is a problem we have in modern front-end apps which we didn't have when forms were generated by some backend (like Rails or ASP), because the browsers do try to remember the values we filled in the forms! Because all the routing and rendering happens on the client side, way after the browser "rehydrates" its inputs, we lose all the state of the form. This is clearly unacceptable because it's the worst user experience there is. I discussed the issue with the people who volunteered with me and consulted Google. It seems like most solutions use localStorage, sessionStorage or indexedDB. I decided to go with sessionStorage because both localStorage and indexedDB sounded like long-term caches, and a session-long cache sounded more appealing to me. I decided to go and make a custom hook (useSessionState) that worked like useState, only it read the initial value from sessionStorage and wrote all changes to sessionStorage as well. 
Then, I made all form elements controlled by specifying onChange and value. Thinking about the solution again, I didn't think it is good enough. The main problem I have with it, is that sessionStorage is consistent across different browser tabs/windows. It is basically a global cache. That means that you can't use the form in more than one tab(!). Not exactly what you expect from a web browser. Imagine you open multiple tabs and fill forms just to realize they override each other silently. Absurd! This would also happen in localStorage and indexedDB because they too work like a global cache. So how can I still make the form work across different tabs while supporting refreshes and navigations? Altering the History State Remember the browser history state we have just used to provide state when navigating to a new page? We did it by calling the History push function. What if we could change the current page's navigation state? Apparently, this is possible using the replace function, which replaces the current item in the history stack instead of pushing a new one (which is the normal behavior of navigation). We can avoid altering the pathname (or URL) part, and only alter the state, like so: history.replace({ ...history.location, state: { ...history.location.state, SOME_ITEM: SOME_VALUE } }); This can be wrapped in a hook, to make it look and feel like the useState hook: import { useHistory } from "react-router-dom"; function useHistoryState<T>(key: string, initialValue: T): [T, (t: T) => void] { const history = useHistory(); const [rawState, rawSetState] = useState<T>(() => { const value = (history.location.state as any)?.[key]; return value ?? initialValue; }); function setState(value: T) { history.replace({ ...history.location, state: { ...history.location.state, [key]: value } }); rawSetState(value); } return [rawState, setState]; } Now, when we don't have any plain useState in our form, every form input (and even complex JSON objects) will be preserved across refreshes and page navigations without further thinking. This solution isn't just for people who use React Router. The attached CodeSandbox has a native implementation you can use anywhere. Make sure to open the preview in a new window because CodeSandbox's preview pane does not preserve history state. You see, making forms didn't use to be hard. Making them work like they did while using modern front-end frameworks shouldn't be either. Implementing this simple setHistoryState makes us stop thinking about how to manage forms' state and lets us just write forms with a great experience. All thanks to the History API, which doesn't only allow us to do magnificent client-side routing, it also helps us preserve meaningful state for our pages. 🎉
https://gal.hagever.com/posts/react-forms-and-history-state
CC-MAIN-2021-49
refinedweb
1,043
54.42
I have included a LaTeX typeset version of this tutorial. Note that any source code is not included in the typeset version of this tutorial. Number of downloads: 123 I. Introduction In mathematics, economics, and computer science, matching problems deal with taking a set of elements and pairing them off such that any two pairs are disjoint. Note that elements may be unpaired. A marriage is a prime example of a matching, as no person can have multiple spouses. Some people may prefer to remain single as well. These are all acceptable options in a stable marriage. This tutorial will explore the Stable Marriage Problem, including the Gale-Shapley algorithm and results about the stable matchings it yields. II. Stable Marriage Problem In the last section, the notion of a matching was described intuitively as a pairing of a subset of elements, leaving potentially unpaired elements. This is formalized by defining a matching as an involution. Of course, a definition of an involution is first needed. Consider some set X and a function f : X -> X. Then the function f is an involution if f is its own inverse. That is, f o f(x) = x, for every x \in X. Let's consider an example of an involution. Let X = { 0, 1, 2, 3, 4, 5}, and let f : X -> X be given by f(5) = 5 and f(x) = 4 - x for every x \neq 5. Observe that for x = 5, f o f(5) = 5, which satisfies the involution property. Now suppose x = 3. So f(x) = 1, and f(1) = 3. So f \circ f(3) = 3. More generally, if x \neq 5, then f o f(x) = 4 - f(x) = 4 - (4 - x) = 4 - 4 + x = x. So the function provided is an involution. Observe as well that this involution pairs the elements 0-4 and 1-3, leaving 2 and 5 unpaired. Another example of a matching is a marriage. We have John marrying Jane, Bob marrying Betsy, and Carl and Carol remaining single. We define this formally with the set X = { John, Jane, Bob, Betsy, Carl, Carol } and f : X \to X defined by: - f(John) = Jane, and f(Jane) = John - f(Bob) = Betsy, and f(Betsy) = Bob - f(Carl) = Carl, and f(Carol) = Carol So now that a matching has been defined, the next step is to introduce the Stable Marriage Problem. We start with two types of players, given by sets X and Y with |X| = |Y|. For example, we could have sets of Firms and Workers, or Men and Women. Each member of a set can only be matched with a member of the other set or itself. That is, no two distinct elements in the set X can be matched together, nor can two distinct elements in the set Y be matched together. However, a player can be unmatched, which is equivalent to matching it with itself. Each player can only be matched with at most one other player. Finally, each player i has a strict and transitive preference relation \succi. In particular, we only care about the players with whom i would prefer to be matched with over staying single. If player i prefers to stay single than be matched with player j, we need not consider player j when constructing a stable marriage. Note: j \succi k reads that player i strictly prefers player j to player k. So we know that a stable marriage is a matching. So what exactly does it mean for a marriage to be stable? Intuitively, if there is a player of type X and a player of type Y who would prefer each other to their current mates, then the marriage is not stable. So consider the example marriage above. Suppose John and Jane are married, and Bob and Betsy are married. If Bob would prefer Jane to Betsy as his spouse, and Jane would prefer Bob to John as her spouse, then the marriage is not stable. 
The reason for this is that Bob and Jane would leave their respective spouses for each other, which maximizes their respective preferences. So formally, a marriage \mu : X U Y -> X U Y is stable if and only if for every pair (i, j) \in X x Y, we have j \not \succi \mu(i) or i \not \succj \mu(j) (or both). That is, there is no couple that would prefer to be with each other instead of their current spouses (which includes being single). Let's consider an example. Let X = { x1, x2, x3 } and Y = { y1, y2, y3 }. Below are each player's preferences in descending order. That is, the first element in the list is the player's most desirable spouse. The last element on the given list is the last alternative preferable to being single. So for example, x1 as shown below prefers y2 the most, then y1. However, x1 would rather stay single than be matched with y3. - x1: y2, y1 - x2: y1, y2, y3 - x3: y1, y2 - y1: x1, x3, x2 - y2: x2, x1, x3 - y3: x1, x3, x2 Consider the matching of x1-y1, x2-y2, x3-y3. Observe that y1 and y2 have their most preferred spouses. Even though x1 and x2 would prefer to swap spouses, their respective partners would not prefer to swap. So thus far, the definition of a stable marriage seems to hold. Now consider the pairing of x3-y3. Observe that x3 prefers to be single rather than matched with y3. Recall that remaining single is equivalent to being matched with oneself. So x3 would prefer to be matched with a spouse (namely, x3 itself) that would mutually prefer to be matched with x3. Thus, this marriage fails to be stable. Instead, the marriage x1-y1, x2-y2, x3 and y3 is stable. In a later section, the notion of stability will be generalized into the solution concept known as the core. The notion of the core provides a more comprehensive framework to discuss cooperative games, such as the stable marriage problem. For now, let's introduce the Gale-Shapley algorithm to solve the Stable Marriage Problem. III. Gale-Shapley Algorithm In solving the Stable Marriage Problem, we seek to design a mechanism that takes individual preferences and returns a stable matching. The mechanism designed by Gale and Shapley is an algorithm which follows an intuitive proposal model in the marriage market, allowing input from both types of players. We start by exploring the algorithm, its correctness, and its time complexity. From there, we will examine results about the matchings produced by this mechanism. The Gale-Shapley algorithm is very straightforward. It starts with the two types of players, X and Y. One of these types serves as the proposing type, and the other as the recipient type. We begin at each iteration by selecting an unmatched element xi \in X. The element xi then proposes to its potential mates in preference order until its proposal is held or it exhausts all potential mates. Note that xi will not propose to a player yj \in Y if it prefers being single to being matched with yj. The players of type Y cannot make proposals. They can only accept or reject proposals as they receive them. If a player yj \in Y is unmatched, it will hold a proposal from player xi if and only if yj prefers being matched with xi to remaining single. If yj is presently holding a proposal by another player xk \in X, then yj will accept xi's proposal if and only if xi \succj xk. If this happens, then xk is unmatched and placed back into the pool of X elements seeking a mate. Let's work through an example. Suppose the elements in set X are the proposers. Consider the steps of this algorithm. 
- We start with element x1, who proposes to y2. As y2 prefers x1 to being single, y2 holds the proposal from x1. - Next, x2 proposes to y1, who holds x2's proposal. - Now, x3 proposes to y1, who rejects his proposal. Next x3 proposes to y2, who also rejects his proposal. Thus, x3 will remain unmatched. As there are no more unmatched X elements, the algorithm terminates and the Y elements are matched as specified by the proposals being held. So the marriage here is x1 - y2, x2 - y1, x3, y3. Observe that this is a different stable marriage than was deduced in the previous section. The stable marriage of x1 - y1, x2 - y2, x3, y1 will be returned if the Y elements proposed instead of the X elements. Let's contrast the cases in which the X elements proposed versus when the Y elements proposed. Observe that the stable matching returned by the Gale-Shapley algorithm favors the proposers. That is, when the X elements proposed, the X elements obtained their preferred mate in comparison to when the Y elements proposed. Similarly, when the Y elements proposed, they received their preferred mates in comparison to the stable marriage when the X elements proposed. In a later section, it will be proven that the Gale-Shapley algorithm favors the proposers. I now provide a Java implementation of the Gale-Shapley algorithm. Consider first the Actor class. This class is the backbone of the implementation, modeling a single player in the Stable Marriage Problem. For this implementation, we classify players as either Firms or Workers. That is, we have a set of Firms and a set of Players, with one of of these sets being the proposing set. Functionality for both making proposals and accepting proposals is included in the Actor class. Note that a different component is responsible for orchestrating how the Actors interact. package stablemarriage; /** * @author Michael Levet * @date 05/19/2015 * * This class models an Actor, either a Firm or Worker, in the * Stable Marriage problem. Each Actor has preferences regarding * Actors of the other type. That is, a Firm can prefer Workers * but a Firm cannot prefer other Firms. An Actor of one type * can propose to Actors of the other type, as well as accept or * reject proposals. */ import java.util.LinkedList; import java.util.Iterator; public class Actor { public enum Type{FIRM, WORKER}; private Type type; private String name; private LinkedList<Actor> preferences; private Actor match; /** * @param type Whether this is a Firm or Worker * @param name The name of this Actor */ public Actor(Type type, String name){ this.type = type; this.name = name; this.preferences = new LinkedList<Actor>(); this.match = null; } /** * @return Actor The matched Actor or null if unmatched. */ public Actor getMatch(){ return this.match; } /** * @param actor The Actor to insert into the preferences ranking. * Duplicates entries are not allowed. * @return boolean true iff actor was successfully inserted. * @throws IllegalArgumentException if actor.type == this.type */ public boolean insertLeastPreferredActor(Actor actor){ if(actor.type == this.type){ throw new IllegalArgumentException("The parameter type is the same " + "as this Actor. An Actor can only be matched with an Actor " + "of different type."); } if(this.preferences.contains(actor)){ return false; } return this.preferences.add(actor); } /** * @param actor The Actor to insert into the preferences ranking. * Duplicates entries are not allowed. * @param preferenceRanking The relative index in which to insert the Actor. 
* If preferenceRanking < preferences.size(), then * actor is inserted into the end of preferences. * @return true iff actor was successfully inserted. * @throws IllegalArgumentException if actor.type == this.type */ public boolean insertActor(Actor actor, int preferenceRanking){ if(preferenceRanking < this.preferences.size()){ return insertLeastPreferredActor(actor); } if(actor.type == this.type){ throw new IllegalArgumentException("The parameter type is the same " + "as this Actor. An Actor can only be matched with an Actor " + "of different type."); } if(this.preferences.contains(actor)){ return false; } this.preferences.add(preferenceRanking, actor); return true; } /** * @return true iff a match was made * * This method examines the Actors preferred mates in preference order, * attempting to propose to each of them. It terminates when it successfully * find a match or exhausts all viable possibilities. * */ public boolean makeProposals(){ Actor temp = null; do{ temp = this.preferences.pollFirst(); System.out.println(this.name + " proposed to " + temp); if(temp.acceptProposal(this)){ System.out.println(temp + " held the proposal from " + this); this.match = temp; return true; } System.out.println(temp + " rejected the proposal from " + this); }while(temp != null); return false; } /** * @param proposer The Actor proposing * @return true iff the proposal is accepted * * This method checks to see if proposer.type != this.type, * and then accepts proposer iff proposer is in this.preferences * and this.match isn't preferred to proposer. */ public boolean acceptProposal(Actor proposer){ if(proposer.type == this.type){ return false; } int index = this.preferences.indexOf(proposer); if(index == -1){ return false; } else if(this.match != null){ if(this.preferences.indexOf(this.match) < index){ return false; } System.out.println(this + " unmatched from " + this.match); this.match.match = null; } this.match = proposer; return true; } /** * @return true if this Actor is unmatched and has unchecked potential mates */ public boolean canMakeProposals(){ return (match == null) && preferences.size() > 0; } /** * @return The name of this Actor */ public String toString(){ return this.name; } } The component that handles Actor interaction and the logic-flow of the Gale-Shapley algorithm is the MatchMaker class. package stablemarriage; /** * @author Michael Levet * @date 05/19/2015 * * This class implements the Gale-Shapley algorithm * to solve the stable-marriage problem. */ import java.util.List; import java.util.ArrayList; public class MatchMaker { private List<Actor> firms, workers; /** * @param firms The firms * @param workers The workers */ public MatchMaker(List<Actor> firms, List<Actor> workers){ this.firms = firms; this.workers = workers; } /** * @param proposer Determines if the Firms or Workers are proposing */ public void stableMarriage(Actor.Type proposer){ if(proposer == Actor.Type.FIRM){ matchProposers(firms, workers); } else{ matchProposers(workers, firms); } } /** * @param proposers The proposing Actors * @param recipients The Actors accepting (or rejecting) proposals * * This method examines each Actor and determines if it can make a proposal. * If so, it lets the Actor make all possible proposals. It proceeds to the * next Actor once the current Actor has found a match or has exhausted all * possibilities. This method terminates once an iteration through proposers * has been made without a new match being made. 
*/ private void matchProposers(List<Actor> proposers, List<Actor> recipients){ boolean hasMadeMatch = false; do{ hasMadeMatch = false; for(Actor actor : proposers){ if(actor.canMakeProposals()){ actor.makeProposals(); hasMadeMatch = true; } } }while(hasMadeMatch); } } Finally, consider the driver class for this implementation. This demo uses the following actors with their preferences shown below. It demonstrates both firms as the proposers, and workers as the proposers. package stablemarriage; /** * @author Michael Levet * @date 05/19/2015 * * This class demonstrates the Gale-Shapley implementation. */ import java.util.List; import java.util.ArrayList; public class StableMarriage { private static List<Actor> firms, workers; /** * @param args the command line arguments */ public static void main(String[] args) { System.out.println("Demo Firms Proposing: "); demoFirmsProposing(); System.out.println("\n\nDemo Workers Proposing: "); demoWorkersProposing(); } private static void initializeActors(){ Actor f1 = new Actor(Actor.Type.FIRM, "F1"); Actor f2 = new Actor(Actor.Type.FIRM, "F2"); Actor f3 = new Actor(Actor.Type.FIRM, "F3"); Actor f4 = new Actor(Actor.Type.FIRM, "F4"); Actor w1 = new Actor(Actor.Type.WORKER, "W1"); Actor w2 = new Actor(Actor.Type.WORKER, "W2"); Actor w3 = new Actor(Actor.Type.WORKER, "W3"); Actor w4 = new Actor(Actor.Type.WORKER, "W4"); firms = new ArrayList<Actor>(); firms.add(f1); firms.add(f2); firms.add(f3); firms.add(f4); workers = new ArrayList<Actor>(); workers.add(w1); workers.add(w2); workers.add(w3); workers.add(w4); f1.insertLeastPreferredActor(w3); f1.insertLeastPreferredActor(w1); f2.insertLeastPreferredActor(w1); f2.insertLeastPreferredActor(w3); f2.insertLeastPreferredActor(w4); f2.insertLeastPreferredActor(w2); f3.insertLeastPreferredActor(w1); f3.insertLeastPreferredActor(w3); f4.insertLeastPreferredActor(w3); f4.insertLeastPreferredActor(w1); f4.insertLeastPreferredActor(w4); f4.insertLeastPreferredActor(w2); w1.insertLeastPreferredActor(f1); w1.insertLeastPreferredActor(f3); w2.insertLeastPreferredActor(f1); w2.insertLeastPreferredActor(f4); w2.insertLeastPreferredActor(f2); w2.insertLeastPreferredActor(f3); w3.insertLeastPreferredActor(f3); w3.insertLeastPreferredActor(f1); w3.insertLeastPreferredActor(f4); w4.insertLeastPreferredActor(f4); w4.insertLeastPreferredActor(f1); w4.insertLeastPreferredActor(f3); } public static void demoFirmsProposing(){ initializeActors(); MatchMaker yenta = new MatchMaker(firms, workers); yenta.stableMarriage(Actor.Type.FIRM); for(Actor actor : firms){ System.out.println(actor + " Match : " + actor.getMatch()); } for(Actor actor: workers){ System.out.println(actor + " Match: " + actor.getMatch()); } } public static void demoWorkersProposing(){ initializeActors(); MatchMaker yenta = new MatchMaker(firms, workers); yenta.stableMarriage(Actor.Type.WORKER); for(Actor actor : firms){ System.out.println(actor + " Match : " + actor.getMatch()); } for(Actor actor: workers){ System.out.println(actor + " Match: " + actor.getMatch()); } } } Now that the algorithm is clear, the next goal is to prove and understand its correctness. In order for the Gale-Shapley algorithm to be a valid mechanism, it must terminate and produce a stable marriage for any valid set of preference profiles. Theorem 1: The Gale-Shapley algorithm terminates and produces a stable matching. Claim 1, Claim 2, and Claim 3 below together imply Theorem 1. Claim 1: The Gale-Shapley Algorithm correctly terminates. Proof: This proof is by extremality. 
Each proposer can prefer at most all acceptors to being single. Suppose this is the case, and a single proposer receives his worst mate. Then the remaining n-1 proposers must make at most n-1 proposals each. By the algorithm, each proposer will solicit a potential mate at most once. Therefore, at most n + (n-1)^2 = n^2 - n + 1 proposals will be executed. If no unmatched proposer is eligible to solicit a mate, then the algorithm will terminate. And so, the algorithm must terminate after at most n^2 - n + 1 proposals. QED. Corollary: The Gale-Shapley algorithm executes in O(n^2) time. Proof: By Claim 1, we have at most O(n^2) proposals necessary to construct the matching. Each proposal takes O(1) time, as does selecting the next available proposer to match. It follows that the algorithm takes O(n^2) time. QED. Claim 2: The Gale-Shapley algorithm produces a matching. Proof: It suffices to show that no player is matched with more than one mate. Observe that the proposers solicit each potential mate in preference order. By the algorithm, a proposer stops soliciting when a potential mate holds his proposal. It follows that no proposer will be matched with multiple mates. Similarly, each acceptor can either accept or reject a proposal. If the acceptor is presently single, then the definition of a matching will remain satisfied either by accepting or rejecting the proposal. If the acceptor is already holding a proposal, then the acceptor can hold the new proposal only by unmatching herself from the current suitor. It follows that no acceptor will be matched with more than one suitor. It follows that the Gale-Shapley algorithm produces a matching. QED. Claim 3: The matching produced by the Gale-Shapley algorithm is stable. Proof: The proof is by contradiction. Without loss of generality, let X be the set of proposers. Let \mu : X U Y \to X U Y be the matching returned by the Gale-Shapley algorithm. Suppose \mu is not stable. Then there exists a pair (i, j) \in X \times Y such that j \succi \mu(i) and i \succj \mu(j). By the algorithm, player i would have proposed to player j before player \mu(i) since j \succi \mu(i). If player j was single, then player j would have held player i's proposal. If instead player j was not single and (without loss of generality) holding \mu(j), then player j would have instead opted to hold player i's proposal. However, this contradicts the fact that the algorithm matched player i with player \mu(i). It follows that \mu must be stable. QED. IV. Stable Matchings and the Core Recall from the end of section II that the Core was briefly mentioned. It is the standard solution concept in cooperative games. Many important results regarding the Gale-Shapley algorithm and the Stable Marriage Problem are phrased in terms of the Core. It is also important to understand how the Stable Marriage Problem fits in with the existing and more general theory of cooperative games. For this reason, it is important to introduce briefly the notion of the Core. Recall from basic notions of game theory that the Nash equilibrium is the standard solution concept. A set of strategies constitutes a Nash equilibrium if no player can unilaterally deviate and improve his or her outcome. The Core generalizes this notion of the Nash equilibrium to coalitions. Intuitively, an allocation is in the Core if no subset of players can deviate, possibly by cooperation, and improve their outcomes. Let's start with the framework. 
In a cooperative game, each player i \in {1, ..., n} has an initial endowment wi from the set of commodities. Note that a given player may have an initial endowment of nothing. Each player may belong to zero or more potential coalitions, where a coalition is a set S \subset {1, ..., n}. The coalition where S = { 1, ..., n } is referred to as the grand coalition. Every coalition has a set of available actions, and each player i has a preference relation \succeqi over all the coalitions and actions available to said player. In the stable marriage problem, each player i's initial endowment is wi = i, as each player is initially single or matched with itself. The coalitions come into play when we ask the question- when does an allocation not make sense? The answer is pretty clear- when a coalition of players can collaborate and improve their outcomes. If a coalition can improve upon the given allocation, the coalition is said to block the allocation. This notion of a blocking coalition gets to the heart of the core. An allocation is said to be in the core if no blocking coalition exists. Let's formalize these notions some. Let x = (x1, x2, ..., xn) be an allocation. The allocation x is said to be dominated by some other feasible allocation y = (y1, y2, ..., yn) if there exists a coalition S such that for every s \in S, ys \succeqs xs; and for some j \in S, yj \succj xj. That is, all members in the coalition S are at least indifferent to the allocation y in comparison to the allocation x; and at least one member of S strictly prefers y to x. The coalition S is said to then block x. The above definition of the core is actually pretty formal. The core contains the set of allocations that are not dominated; or equivocally, the set of allocations for which no blocking coalition exists. So how does the notion of the core relate to the Stable Marriage Problem? As it turns out, the core is exactly the set of stable matchings. This fact will be proven later, but let's first start with an example to develop some intuition. Recall the example from section II with the sets X = { x1, x2, x3 } and Y = { y1, y2, y3 }, and the preferences given below. - x1: y2, y1 - x2: y1, y2, y3 - x3: y1, y2 - y1: x1, x3, x2 - y2: x2, x1, x3 - y3: x1, x3, x2 In that example, we first examined the matching \mu given by x1-y1, x2-y2, x3-y3. It was shown above that \mu was not stable, as x3 would prefer to be matched with itself. It was unwieldy to apply the definition of a stable matching, though, in treating x3 as its own spouse. Instead, we show that \mu is not in the core of this game. The argument is conceptually the same, but using a coalition makes things much cleaner. In order to show that \mu is not in the core of this game, it suffices to construct a blocking coalition. Consider the coalition { x3 }. The only other feasible allocation for x3 is to retain its initial endowment, which means that x3 will instead opt to remain single. Observe that x3 prefers to remain single than to be paired with y3. Thus, \mu is not in the core, as a blocking coalition exists. It follows that \mu is not stable. Now consider the matching \tau given by x1-y1, x2-y2, x3, y3. Recall from the example in section II that this matching is stable. Let's now prove that this matching is in the core. Observe that x1, x2, y1 and y2 all prefer their mates to being single, so these players will not form a one-person coalition. As x3 and y3 are already single, they will not form a blocking coalition by themselves. Now consider two-person coalitions. 
As y1 and y2 have their top choices, they will not participate in a blocking coalition. So x1, x2 and x3 can only participate in a blocking coalition with y3. In all instances, there is no incentive for x1, x2 or x3 to do so. It follows that there are no blocking coalitions, so \tau is in the core. This brings us to proving our result: Theorem: Let X, Y be the sets of players satisfying the assumptions of the stable marriage problem. A matching \mu is in the core if and only if each matched player prefers his or her mate to being single, and there does not exist a pair (i, j) \in X \times Y such that j \succi \mu(i) and i \succj \mu(j). Proof: Let \mu be a matching. Suppose first that \mu is in the core. By definition of \mu belonging to the core, there is no blocking coalition for \mu. Suppose to the contrary that \mu is not stable. Then either a matched player would prefer to be single, or there exists an (i, j) \in X \times Y such that j \succi \mu(i) and i \succj \mu(j). If a matched player would prefer to be single, then said player could form a blocking coalition by himself, contradicting the assumption that \mu is in the core. By similar argument, if there was an (i, j) \in X \times Y such that j \succi \mu(i) and i \succj \mu(j), then {i, j} would constitute a blocking coalition, again contradicting the assumption that \mu is in the core. It follows that \mu must be a stable matching, and so each element of the core is a stable matching. Conversely, suppose that \mu is stable. It will be shown that \mu is also in the core. As \mu is stable, no player with a mate prefers to be single. Thus, no one-player blocking coalitions exist. Similarly, since \mu is stable, there does not exist a pair (i, j) \in X \times Y such that j \succi \mu(i) and i \succj \mu(j). It follows that there can be no blocking coalition with more than one player. Thus, no blocking coalitions exist and \mu is in the core. Thus, the set of stable matchings is a subset of the core. It follows that the core is exactly the set of stable matchings. QED. V. Results on Gale-Shapley Algorithm The purpose of this section is to analyze the Gale-Shapley algorithm to analyze how well it performs as a mechanism. In other words, this section addresses the economic issues associated with the Gale-Shapley algorithm instead of the computer science issues. Two primary questions will be addressed. The first question asks how well the players perform under the Gale-Shapley algorithm. The second question asks if there is room to falsely report one's preferences. In other words, does the Gale-Shapley algorithm incentivize honesty? If reporting one's preferences honestly is a weakly dominant strategy for all players, then the mechanism is called strategy proof. We will examine whether or not the Gale-Shapley algorithm is strategy proof. Recall the example from section III, where an example of the Gale-Shapley algorithm was provided. We had When the X elements were the proposers, we had the matching x1-y2, x2-y1, x3, y3. Observe that the X elements each obtain their best matches under the Gale-Shapley algorithm, while the Y elements are not so fortunate. What happens instead if the Y elements were the proposers? Under the Gale-Shapley algorithm, the stable matching would be x1-y1, x2-y2, x3, y3. Observe that this favors the Y elements, but not the X elements. In fact, for any preference profile, the Gale-Shapley algorithm provides the proposers with their best core allocation and the acceptors with their worst core allocation. 
These statements will be formally proven. Theorem 2: For any preference profile and any ordering of the proposers, the Gale-Shapley algorithm returns the same stable matching \mu. In this stable matching, each proposer has his best possible mate amongst all stable matchings. Proof: This theorem will be proven by contradiction. Let \sigma \in SX, where SX is the Symmetry group or group of permutations of the proposing set X. Let \mu be the matching returned by the Gale-Shapley algorithm when evaluating proposers in the order x\sigma(1), x\sigma(2), ..., x\sigma(n)}. Now let \mu' be a stable matching, and let xi \in X such that \mu'(xi) \succi \mu(xi). By the Gale-Shapley algorithm, xi proposed to \mu'(xi), who in turn rejected xi. Suppose that, without loss of generality, this is the first instance of an acceptor rejecting a stable partner. As \mu' is a stable matching, \mu'(xi) prefers to be matched with xi over being single. So \mu'(xi) must have been already matched with another partner, xj \in X. Since no stable partner has been previously rejected by any acceptor, it follows that xj can have no stable partner better than \mu'(xi). So \mu'(xi) \succj \mu'(xj). Thus, { xj, \mu'(xi) \} form a coalition blocking \mu', contradicting the stability of \mu'. As the ordering of the proposers was arbitrary, it follows that for every \sigma \in SX, the same stable matching \mu is produced by the Gale-Shapley algorithm, and that \mu is the best stable matching for the proposers. QED. Theorem 2 yields an important corollary, that the proposers have no incentive to manipulate the outcome of the Gale-Shapley algorithm. Corollary: It is a weakly dominant strategy for proposers to honestly reveal their preferences during the execution of the Gale-Shapley algorithm. Proof: From Theorem 2, the stable matching \mu returned by the Gale-Shapley algorithm is the best possible core allocation for the proposers. Therefore, no proposer can misrepresent his preferences and improve his outcome. QED. Next, it will be shown that the acceptors receive their worst core allocation under the Gale-Shapley algorithm. To prove this, a more general result will be shown. Theorem 3: Let \mu be a proposer-optimal stable matching. Then \mu grants each acceptor her worst possible mate amongst all core allocations. Proof: This theorem will be proven by contradiction. Suppose there there exists an acceptor that does not have her worst choice in \mu. Suppose there exists a second stable matching \mu' in which an acceptor prefers her mate in \mu to her mate in \mu'. Let yi in Y such that \mu(yi) \succ^{i} \mu'(yi). As \mu is proposer-optimal, \{yi, \mu(yi)\} blocks \mu', contradicting the stability of \mu'. QED. So we have some powerful results. The Gale-Shapley algorithm correctly produces a stable marriage for any preference profile; and in fact, it produces the same stable marriage for any initial ordering of the proposers. This mechanism also favors the proposers to the point where it is in their best interests to honestly reveal their preferences, while the acceptors don't fare as well. The last important result deals with whether or not the Gale-Shapley algorithm is a strategy-proof mechanism. That is, can some players misrepresent their preferences to improve their outcomes? The answer is yes. In fact, there is a stronger result- no incentive compatible, strategy-proof mechanism exists for the stable marriage problem. That is, no mechanism exists where every player benefits from honestly revealing their preferences. 
The proof is actually quite simple. Theorem 4: No incentive compatible, strategy-proof mechanism exists for the stable marriage problem. Proof: A counter-example suffices as proof. Consider sets X = { x1, x2 } and Y = { y1, y2 } with preferences: - x1: y1, y2 - x2: y2, y1 - y1: x2, x1 - y2: x1, x2 The only two stable matchings are \mu given by x1-y1, x2-y2, and \tau given by x1-y2, x2-y1. Suppose that players of type X are proposers. Then player y1 can misrepresent her preferences by indicating she prefers to be single rather than mated with player x1. This will induce the matching \tau, since under the reported preferences \mu no longer appears stable, and y1 thereby obtains her top choice x2. Similarly, if the players of type Y are proposing, player x1 can misrepresent his preferences, indicating he will only pair with y1. This will induce the matching \mu, the only matching that appears stable under the reported preferences. Thus, no incentive compatible, strategy-proof mechanism exists for this instance of the stable marriage problem. It follows that no incentive compatible, strategy-proof mechanism exists for the general problem. QED.
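The Java classes in section III give a full object-oriented implementation. For readers who just want the proposal loop in a compact form, the following is an illustrative Python sketch; it is not part of the original tutorial and is independent of the Actor/MatchMaker API above. Preference lists contain only acceptable mates, most preferred first, following the convention of section II, and the small instance at the bottom is a hypothetical example.

def gale_shapley(proposer_prefs, acceptor_prefs):
    # rank[a][p]: position of proposer p on acceptor a's list (lower is better)
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # index of the next mate to try
    held = {}                                     # acceptor -> proposer currently held
    free = list(proposer_prefs)                   # unmatched proposers

    while free:
        p = free.pop()
        prefs = proposer_prefs[p]
        while next_choice[p] < len(prefs):
            a = prefs[next_choice[p]]
            next_choice[p] += 1
            if p not in rank[a]:                  # a prefers staying single to p
                continue
            current = held.get(a)
            if current is None:
                held[a] = p                       # a holds p's proposal
                break
            if rank[a][p] < rank[a][current]:     # a prefers p to its current suitor
                held[a] = p
                free.append(current)              # rejected suitor re-enters the pool
                break
        # if p exhausts its list without being held, p stays single

    matching = dict(held)                         # acceptor -> proposer
    matching.update({p: a for a, p in held.items()})
    return matching

# Tiny hypothetical instance (not from the text): matches a-y and b-x.
men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))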
http://www.dreamincode.net/forums/topic/376327-algorithmic-game-theory-stable-marriage-problem/
CC-MAIN-2017-43
refinedweb
5,494
57.57
Importing modules in Python can be done easily. Importing modules in Python: Importing a module Use the import statement: import random print(random.randint(1, 10)) 4 You can import a module and assign it to a different name: import random as rn print(rn.randint(1, 10)) 4 If your Python file main.py is in the same folder as custom.py, you can import it like this: import custom It is also possible to import a function from a module: from math import sin sin(1) 0.8414709848078965 To import specific functions deeper down into a module, the dot operator may be used only on the left side of the import keyword: from urllib.request import urlopen In Python, we have two ways to call a function from the top level. One is import and another is from. We should use import when we have a possibility of a name collision. Suppose we have a hello.py file and a world.py file, each having a function named function. Then the import statement will work well, whereas from leads to a collision: from hello import function from world import function function() #world's function will be invoked. Not hello's In general, import will provide you with a namespace. import hello import world hello.function() # exclusively hello's function will be invoked world.function() # exclusively world's function will be invoked But if you are sure that there is no chance of a name collision anywhere in your project, you can use the from statement. Multiple imports can be made on the same line: Multiple modules import time, socket, random Multiple functions from math import sin, cos, tan Multiple constants from math import pi, e print(pi) 3.141592653589793 print(cos(45)) 0.5253219888177297 print(time.time()) 1482807222.7240417 The keywords and syntax shown above can also be used in combinations: from urllib.request import urlopen as geturl, pathname2url as path2url, getproxies from math import factorial as fact, gamma, atan as arctan import random, time, sys print(time.time()) 1482807222.7240417 print(arctan(60)) 1.554131203080956 filepath = "/dogs/jumping poodle (december).png" print(path2url(filepath)) /dogs/jumping%20poodle%20%28december%29.png Importing modules in Python: The __all__ special variable Modules can have a special variable named __all__ to restrict what variables are imported when using from mymodule import *. Given the following module: mymodule.py __all__ = ['imported_by_star'] imported_by_star = 42 not_imported_by_star = 21 Only imported_by_star is imported when using from mymodule import *: from mymodule import * imported_by_star 42 not_imported_by_star Traceback (most recent call last): File "", line 1, in NameError: name 'not_imported_by_star' is not defined However, not_imported_by_star can be imported explicitly: from mymodule import not_imported_by_star not_imported_by_star 21 Importing modules in Python: Import modules from an arbitrary filesystem location If. Importing modules in Python: Importing all names from a module from module_name import * for example: from math import * sqrt(2) # instead of math.sqrt(2) ceil(2.7) # instead of math.ceil(2.7) This will import all names defined in the math module into the global namespace, other than names that begin with an underscore (which indicates that the writer feels that it is for internal use only). Warning: If a function with the same name was already defined or imported, it will be overwritten. 
Almost always, importing only specific names (from math import sqrt, ceil) is the recommended way: def sqrt(num): print("I don't know what's the square root of {}.".format(num)) sqrt(4) Output: I don’t know what’s the square root of 4. from math import * sqrt(4) Output: 2.0 Starred imports are only allowed at the module level. Attempts to perform them in class or function definitions result in a SyntaxError. def f(): from math import * and class A: from math import * both fail with: SyntaxError: import * only allowed at module level Programmatic importing Python 2.x Version ≥ 2.7 To import a module through a function call, use the importlib module (included in Python starting in version 2.7): import importlib random = importlib.import_module("random") The importlib.import_module() function will also import the submodule of a package directly: collections_abc = importlib.import_module("collections.abc") For older versions of Python, use the imp module. Python 2.x Version ≤ 2.7 Use the functions imp.find_module and imp.load_module to perform a programmatic import. Taken from standard library documentation import imp, sys def import_module(name): fp, pathname, description = imp.find_module(name) try: return imp.load_module(name, fp, pathname, description) finally: if fp: fp.close() Do NOT use __import__() to programmatically import modules! There are subtle details involving sys.modules, the fromlist argument, etc. that are easy to overlook which importlib.import_module() handles for you. PEP8 rules for Imports Some. Importing modules in Python: Importing specific names from a module Instead) Importing submodules from module.submodule import function This imports function from module.submodule. Importing modules in Python: Re-importing a module When using the interactive interpreter, you might want to reload a module. This can be useful if you’re editing a module and want to import the newest version, or if you’ve monkey-patched an element of an existing module and want to revert your changes. Note that you can’t just import the module again to revert: import math math.pi = 3 print(math.pi) # 3 import math print(math.pi) # 3 This is because the interpreter registers every module you import. And when you try to reimport a module, the interpreter sees it in the register and does nothing. So the hard way to reimport is to use import after removing the corresponding item from the register: print(math.pi) # 3 import sys if 'math' in sys.modules: # Is the math module in the register? del sys.modules['math'] # If so, remove it. import math print(math.pi) # 3.141592653589793 But there is a more straightforward and simple way. Python 2 Use the reload function: Python 2.x Version ≥ 2.3 import math math.pi = 3 print(math.pi) # 3 reload(math) print(math.pi) # 3.141592653589793 Python 3 The reload function has moved to importlib: Python 3.x Version ≥ 3.0 import math math.pi = 3 print(math.pi) # 3 from importlib import reload reload(math) print(math.pi) # 3.141592653589793 The __import__() function The __import__() function can be used to import modules where the name is only known at runtime: if user_input == "os": os = __import__("os") equivalent to import os Note that __import__() expects a dotted module name, not a file path, so something like mod = __import__(r"C:/path/to/file/anywhere/on/computer/module.py") will not work; to load a module from an arbitrary filesystem location, use importlib instead (see the sketch below).
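As a supplement to the programmatic importing notes above (this snippet is not part of the original article): in Python 3, the standard way to load a module from an arbitrary file path is the importlib.util machinery. The module name "mymodule" and the path "/tmp/mymodule.py" below are placeholders.

import importlib.util

# Build a module spec from a file path, create the module object, then execute it.
spec = importlib.util.spec_from_file_location("mymodule", "/tmp/mymodule.py")
mymodule = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mymodule)
# After this, mymodule can be used like any other imported module.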
https://codingcompiler.com/importing-modules/
CC-MAIN-2022-27
refinedweb
1,081
51.75
HDFS is now an Apache Hadoop subproject. An HDFS instance consists of a large number of servers, each storing part of the file system. A typical file in HDFS is gigabytes to terabytes in size, so applications work with large data sets. A file, once created, need not be changed; that is, HDFS follows a write-once, read-many access model. An HDFS cluster consists of a master server (the namenode), which manages the file system namespace and controls access to files, and other nodes in the cluster that serve as datanodes, which handle the storage attached to their nodes and are responsible for block creation, deletion and replication as instructed by the namenode. HDFS is written in Java, so any node that supports Java can run the namenode or datanode applications. This tutorial gives you a Hadoop HDFS command cheat sheet. This will come in very handy when you are working with these commands on the Hadoop Distributed File System (HDFS). Earlier, hadoop fs was used in the commands; it is now deprecated, so we use hdfs dfs. All Hadoop commands are invoked by the bin/hadoop script. This cheat sheet contains multiple commands, almost all the commands that are often used by a Hadoop developer as well as an administrator. It is pretty comprehensive; I have also shown all the options that can be used with each command. If you get an error while running a command, do not panic: just check the syntax of your command, as there might be a syntax issue or a problem with the source or destination you mentioned. We have grouped the commands into the categories below: 1) List Files 2) Read/Write Files 3) Upload/Download Files 4) File Management 5) Ownership and Validation 6) File-system 7) Administration You can download a PDF version of the Hadoop HDFS command cheat sheet or a printable A4 image file from here. Conclusion Keep this A4-size cheat sheet printed on your desk; I am sure you will learn the commands quickly and will be a Hadoop expert very soon. Please keep us posted if you need us to add more commands. The commands are categorized into 7 sections according to their usage. 2 Comments: Hello, I think the line "hdfs -get /hadoop/*-txt /home/ubuntu Copies all the files matching the pattern from local file system to HDFS" is incorrect as the transfer is in the other direction and it should read "Copies all the files matching the pattern from HDFS to local file system". That's correct. Would change it. Thanks.
https://linoxide.com/hadoop-commands-cheat-sheet/
CC-MAIN-2021-43
refinedweb
426
67.18
From: Brian McNamara (lorgon_at_[hidden]) Date: 2003-10-20 05:39:49 On Sun, Oct 19, 2003 at 05:27:24PM -0700, Eric Friedman wrote: > This is something I had been working on about two months ago but didn't > post to the sandbox until recently. This looks potentially cool, but I don't quite understand it. > The basic idea is this: > > switch_( [variant] ) > |= case_< [pattern1] >(...) > |= case_< [pattern2] >(...) > ::: > |= default_(...) // optional catch-all > ; > > The switch_ will fail to compile if not every case is handled. In terms Given the general form of patterns you describe below, I can't possibly imagine how you determine if every case is handled, but for now I'll take your word for it. :) > of handling, the case_ constructors take typical function objects, > though the switch_ ignores any return values. Why ignore return values? If all cases had the same return type (or returns types convertible to one fixed type), could you make the whole thing an expression? Like int x = switch_( [variant] ) // or maybe switch_<int>( [variant] ) |= case_< [pattern1] >(...) |= case_< [pattern2] >(...) ::: |= default_(...) // optional catch-all ; (The alternate syntax in the comment suggests that you pass the intended return type of the whole expression as an argument to switch_, rather than try to infer it from all the case arms.) > An example usage is the following (though I have again left out the > function objects that need to be passed to the case_ constructors): > > using namespace boost::type_switch; > > variant< > int > , pair<int,double> > , pair<int,int> > , pair<double,int> > > var; > > switch_(var) > |= case_< pair<_1,_1> >(...) // matches pair<int,int> > |= case_< pair<int,_> >(...) // matches pair<int,*> > |= case_< pair<_,_> >(...) // matches pair<*,*> > |= case_< int >(...) > ; So to make sure I understand, inside the ellipsis here: |= case_< pair<_1,_1> >(...) // matches pair<int,int> I'd put a function object that knows how to accept pairs and do something? Like "pp" in struct PairPrinter { template <class T, class U> void operator()( pair<T,U> p ) const { cout << p.first << p.second << endl; } } pp; that code? > The pattern matching is implemented in terms of lambda_match, which I > have also added to the sandbox (to boost/mpl). Notably, lambda_match > leverages the MPL lambda workarounds for deficient compilers, extending > the applicability of type_switch somewhat. Does the pattern matching only know about a fixed set of types (like pair)? Or does it work on any template? -- -Brian McNamara (lorgon_at_[hidden]) Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2003/10/54794.php
CC-MAIN-2022-40
refinedweb
421
65.42
Tutorial 44: Text Message Greeter This tutorial brings together what the past two tutorials were working up to: turning your Raspberry Pi into a text messaging greeter who welcomes you home from work! In this tutorial I show you how to scan your LAN for your cellphone and then send you a text message welcoming you home. I know this isn't the biggest or most interesting project, but the concepts learned can be applied to other projects you may have in mind where you want a reaction based on whether you're home, and the ability to send SMS out from your Pi to your phone. Enjoy! Difficulty: easy. Linux understanding: none. Python programming: some. You can copy / paste the code below if you’re having issues with typos or want a shortcut. However I recommend that you follow along in the tutorial to understand what is going on!

from twilio.rest import Client
import os
from time import sleep
from time import time

account_sid = "XXXXXXXXX"  # Put your own Twilio account SID here.
auth_token = "XXXXXXXXX"   # Put your own Twilio auth token here.
client = Client(account_sid, auth_token)

phoneIP = "10.0.1.25"  # Put your own phone's IP address here.

zanStatus = 0        # start off not at home
timeElapsed = 0      # count in seconds how much time has passed
startTime = time()
currentTime = time()
checkFrequency = 10
lonelyTime = 120     # how long I need to be gone to get a reaction

def sendText():
    print("execute the sending of a text here")
    message = client.api.account.messages.create(
        to="+############",      # Put your own cellphone number here.
        from_="+#############",  # Put your own Twilio number here.
        body="Welcome home Zan!")

while True:
    isZanHome = os.system("ping -c 1 " + phoneIP)
    if isZanHome == 0:
        print("Zan is home!")
        startTime = time()
        if zanStatus == 0:
            zanStatus = 1
            print("We should send a text!")
            sendText()
    else:
        print("Zan is not home")
        if timeElapsed > lonelyTime and zanStatus == 1:
            print("Zan is really gone.")
            zanStatus = 0
            startTime = time()
    currentTime = time()
    timeElapsed = currentTime - startTime
    print(timeElapsed)
    sleep(checkFrequency)

To be able to accomplish this tutorial you need to have a Twilio account setup. You can setup your account by following the link below:
http://thezanshow.com/electronics-tutorials/raspberry-pi/tutorial-44
CC-MAIN-2019-39
refinedweb
359
61.97
IRC log of swxg on 2009-05-27 Timestamps are in UTC. 12:58:27 [RRSAgent] RRSAgent has joined #swxg 12:58:27 [RRSAgent] logging to 12:58:29 [trackbot] RRSAgent, make logs world 12:58:30 [Zakim] Zakim has joined #swxg 12:58:32 [trackbot] Zakim, this will be 7994 12:58:32 [Zakim] ok, trackbot; I see INC_SWXG()9:00AM scheduled to start in 2 minutes 12:58:35 [trackbot] Meeting: Social Web Incubator Group Teleconference 12:58:36 [trackbot] Date: 27 May 2009 12:59:07 [tinkster] zakim, mute me 12:59:07 [Zakim] sorry, tinkster, I don't know what conference this is 12:59:31 [rreck] 7994? 12:59:36 [jsalvachua] jsalvachua has joined #swxg 12:59:42 [rreck] libby: 7994 i think 12:59:48 [dom] zakim, code? 12:59:48 [Zakim] the conference code is 7994 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), dom 12:59:59 [libby] thanks rreck 13:00:00 [jsalvachua] Hello to all 13:00:06 [tinkster] zakim, who's here? 13:00:06 [Zakim] INC_SWXG()9:00AM has not yet started, tinkster 13:00:07 [Zakim] On IRC I see jsalvachua, Zakim, RRSAgent, cperey, rreck, libby, adam, caribou, hhalpin, tpa, petef, melvster, mischat, tinkster, karl, trackbot, dom, AlexPassant 13:00:15 [tinkster] zakim, mute me 13:00:15 [Zakim] sorry, tinkster, I don't know what conference this is 13:00:17 [hhalpin] 7996 libby 13:00:29 [tinkster] zakim, this is SWXG 13:00:29 [Zakim] ok, tinkster; that matches INC_SWXG()9:00AM 13:00:30 [hhalpin] no, 7994. 13:00:39 [Zakim] +Hakan 13:00:40 [tinkster] zakim, mute me 13:00:41 [Zakim] tinkster should now be muted 13:00:43 [Zakim] +[IPcaller] 13:00:55 [hhalpin] Zakim, IPcaller is hhalpin 13:00:55 [Zakim] +hhalpin; got it 13:01:09 [Zakim] + +7.942.aacc 13:01:26 [Zakim] +??P20 13:01:35 [Zakim] +Carine 13:01:41 [Zakim] +jsalvachua 13:01:45 [tinkster] How does this IPcaller thing work then? Does Zakim have a SIP interface? 13:01:48 [jsalvachua] zakim, mute me 13:01:50 [hajons] hajons has joined #swxg 13:01:53 [Zakim] jsalvachua should now be muted 13:01:59 [Zakim] +??P24 13:02:09 [petef] zakim, who is talking? 13:02:11 [Zakim] + +4222aadd 13:02:15 [hhalpin] Zakim, who is making noise? 13:02:20 [Zakim] petef, listening for 10 seconds I heard sound from the following: 11 (61%), ??P20 (4%), ??P24 (85%) 13:02:31 [Zakim] hhalpin, listening for 10 seconds I heard sound from the following: 11 (10%), ??P24 (24%) 13:02:35 [petef] zakim p24 is me 13:02:58 [Zakim] + +0798919aaee 13:03:10 [mischat] Zakim: +0798919aaee is mischat 13:03:19 [mischat] zakim, +0798919aaee is mischat 13:03:19 [Zakim] +mischat; got it 13:03:20 [petef] zakim, +4222aadd is me 13:03:21 [adam] yeah 13:03:22 [Zakim] +petef; got it 13:03:29 [libby] zakim, +7.942.aacc is me (I think) 13:03:29 [Zakim] I don't understand '+7.942.aacc is me (I think)', libby 13:03:31 [mischat] zakim, mute me 13:03:31 [Zakim] mischat should now be muted 13:03:34 [libby] zakim, +7.942.aacc is me 13:03:34 [Zakim] +libby; got it 13:04:23 [petef] zakim, mute me 13:04:23 [Zakim] petef should now be muted 13:04:24 [mischat] zakim, who is making noise 13:04:24 [Zakim] I don't understand 'who is making noise', mischat 13:04:38 [tinkster] That's not our anthem though. 13:04:52 [mischat] zakim, who is making noise? 
13:04:54 [Zakim] +karl 13:05:02 [Zakim] mischat, listening for 10 seconds I heard sound from the following: hhalpin (29%), ??P24 (31%) 13:05:12 [claudio] claudio has joined #swxg 13:05:13 [Zakim] + +49.173.515.aaff 13:05:15 [karl] zakim, mute karl 13:05:15 [Zakim] karl should now be muted 13:05:16 [hhalpin] Zakim, read agenda from 13:05:16 [Zakim] working on it, hhalpin 13:05:19 [Zakim] done reading agenda, hhalpin 13:06:02 [hhalpin] OK, we should probably begin now 13:06:03 [mischat] the sound is terrible 13:06:03 [rreck] i nominated chime to scribe 13:06:24 [hhalpin] Zakim, whose making noise? 13:06:24 [Zakim] I don't understand your question, hhalpin. 13:06:28 [petef] sorry that's me 13:06:29 [hhalpin] Zakim, who is making noise? 13:06:32 [mischat] zakim, who's making noise? 13:06:40 [Zakim] hhalpin, listening for 10 seconds I heard sound from the following: hhalpin (67%), ??P24 (9%) 13:06:47 [rreck] me too 13:06:50 [Zakim] mischat, listening for 10 seconds I could not identify any sounds 13:06:59 [hhalpin] is everyone having trouble just *hearing* other people. 13:07:03 [tinkster] All I can hear is a strange humming noise. 13:07:04 [hhalpin] Perhaps we should do a quick go around. 13:07:10 [rreck] it probably worthwile to start 5 minutes earlier with this many participants 13:07:30 [hhalpin] Zakim, whose on the phone? 13:07:30 [Zakim] I don't understand your question, hhalpin. 13:07:33 [mischat] +1 humming noise 13:07:36 [hhalpin] Zakim, who is one the phone? 13:07:36 [Zakim] I don't understand your question, hhalpin. 13:07:42 [rreck] humming 13:07:45 [mischat] hello ! 13:07:46 [hhalpin] Maybe it's me 13:07:46 [tinkster] zakim, who's here? 13:07:46 [Zakim] On the phone I see ??P5, AdamB, tinkster (muted), Hakan, hhalpin, libby, ??P20, Carine, jsalvachua (muted), ??P24 (muted), petef (muted), mischat (muted), karl (muted), 13:07:49 [Zakim] ... +49.173.515.aaff 13:07:50 [Zakim] On IRC I see claudio, hajons, jsalvachua, Zakim, RRSAgent, cperey, rreck, libby, adam, caribou, hhalpin, tpa, petef, melvster, mischat, tinkster, karl, trackbot, dom, AlexPassant 13:07:51 [hhalpin] Zakim, mute me 13:07:51 [Zakim] hhalpin should now be muted 13:07:55 [rreck] whew 13:07:55 [hhalpin] Is the humming noise gone? 13:07:58 [rreck] yes 13:07:59 [mischat] yes 13:08:00 [jsalvachua] yes 13:08:00 [tinkster] humming gone 13:08:05 [petef] zakim, unmute me 13:08:05 [Zakim] petef should no longer be muted 13:08:07 [hhalpin] Ah, perhaps it is some sort of weird Skype bug on my end. 13:08:08 [Zakim] -??P20 13:08:20 [rreck] call in again perhaps 13:08:30 [hhalpin] OK, I'll just mute in and out 13:08:33 [jsalvachua] is environement noise captured by microphone 13:08:38 [mischat] ah, you need headphoen if you are using skype you get feedback from laptop mic -> laptop speakers 13:08:43 [Zakim] +??P20 13:08:46 [petef] I don't seem to be able to unmute me 13:08:51 [Zakim] +cperey 13:08:55 [hhalpin] Zakim, unmute petef 13:08:55 [Zakim] petef was not muted, hhalpin 13:09:09 [hhalpin] OK, let's begin the meeting. 13:09:18 [mischat] hello 13:09:19 [hhalpin] but we need a scribe... 13:09:22 [petef] I think perhaps I am P24? 13:09:22 [hhalpin] do we have any volunteers? 13:09:23 [mischat] i will scribe 13:09:27 [hhalpin] Excellent mischat. 
13:09:31 [cperey] zakim mute me 13:09:38 [adam] +1 mischat 13:09:43 [hhalpin] Convene SWXG WG meeting of 2009-05-27T13:00-15:00Z 13:09:44 [tinkster] ScribeNick: mischat 13:09:44 [petef] couldn't make out a word of that whoevere spoke 13:09:50 [tinkster] Scribe: Mischa 13:09:52 [Zakim] -cperey 13:10:00 [hhalpin] Zakim, unmute me 13:10:00 [Zakim] hhalpin should no longer be muted 13:10:02 [mischat] someone say hello to him 13:10:06 [rreck] i said hello 13:10:07 [hhalpin] Zakim, mute me 13:10:07 [Zakim] hhalpin should now be muted 13:10:19 [hhalpin] Zakim, unmute me 13:10:19 [Zakim] hhalpin should no longer be muted 13:10:27 [Zakim] +cperey 13:10:30 [hhalpin] Zakim, mute me 13:10:30 [Zakim] hhalpin should now be muted 13:10:43 [petef] zakim, unmute p24 13:10:43 [Zakim] sorry, petef, I do not know which phone connection belongs to p24 13:10:44 [hhalpin] PROPOSED: to approve SWXG WG Weekly -- 20 May 2009 as a true record 13:10:49 [hhalpin] Zakim, unmute me 13:10:49 [Zakim] hhalpin should no longer be muted 13:10:55 [hhalpin] Zakim, mute me 13:10:55 [Zakim] hhalpin should now be muted 13:10:56 [petef] yes 13:10:59 [hajons] yes 13:11:00 [jsalvachua] yes 13:11:01 [adam] yes 13:11:04 [tinkster] yes 13:11:05 [cperey] yes 13:11:10 [hhalpin] RESOLVED: approved SWXG WG Weekly -- 20 May 2009 as a true record 13:11:17 [hhalpin] (although notingg that DanBri should have cleaned them up) 13:11:25 [hhalpin] PROPOSED: to meet again Wed, 3 June. scribe volunteer? 13:11:30 [hhalpin] Zakim, unmute me 13:11:30 [Zakim] hhalpin should no longer be muted 13:11:34 [mischat] i think that danbri did, as per his last email 13:11:36 [hhalpin] Zakim, mute me 13:11:36 [Zakim] hhalpin should now be muted 13:11:40 [hhalpin] Scribe? 13:11:40 [mischat] not me 13:11:43 [mischat] :) 13:11:43 [tinkster] not me 13:11:49 [hhalpin] Zakim, unmute me 13:11:49 [Zakim] hhalpin should no longer be muted 13:11:59 [mischat] we could rotate people ! 13:12:02 [adam] i can do june 10th 13:12:03 [hhalpin] Zakim, mute me 13:12:03 [Zakim] hhalpin should now be muted 13:12:03 [mischat] +1 13:12:05 [tinkster] I'm happy to scribe occasionally, but not next week. 13:12:16 [hhalpin] OK, let's have adam provisionally as scribe for next meeting. 13:12:26 [mischat] i have no voice input today, and am happy scribe as a result 13:12:29 [cperey] I will scribe in the future but not June 3 13:12:43 [hhalpin] RESOLVED: move to a scribing list 13:12:48 [hhalpin] Zakim, next item 13:12:48 [Zakim] I see nothing on the agenda 13:12:51 [hhalpin] Hmmm... 13:12:59 [hhalpin] General Organization 13:13:02 [cperey] i have no audio 13:13:12 [hhalpin] unmute me 13:13:16 [hhalpin] Zakim, unmute me 13:13:16 [Zakim] hhalpin should no longer be muted 13:13:35 [hhalpin] 13:13:39 [mischat] hhalpin: is going through last week's actions 13:13:51 [hhalpin] Zakim, unmute me 13:13:51 [Zakim] hhalpin was not muted, hhalpin 13:13:54 [mischat] apassant is a SPARQL liasion 13:13:55 [hhalpin] Zakim, mute me 13:13:55 [Zakim] hhalpin should now be muted 13:13:59 [mischat] in need of other ones ? 13:14:07 [hhalpin] Do we have any volunteers? 13:14:12 [rreck] are the groups that need liasons listed on the wiki? 13:14:16 [hhalpin] Yes. 13:14:23 [tinkster] I've put myself down for Microformats. 13:14:24 [cperey] I only hear Harry, is anyone else speaking on this bridge? 
13:14:24 [Zakim] + +2 13:14:26 [mischat] are in need of liasons 13:14:36 [AlexPassant] Zakim, +2 is me 13:14:36 [Zakim] +AlexPassant; got it 13:14:36 [jsalvachua] i may try to interface with dataportability.org 13:14:42 [hhalpin] Ah, OK. 13:14:49 [hhalpin] Could you add yourselves to the wiki? 13:15:16 [mischat] jsalvachua: to be liaison with dataportability.org 13:15:25 [Zakim] + +039011228aahh 13:15:33 [mischat] toby to be liason with the microformats community 13:15:43 [hhalpin] \me Zakim, mute me 13:15:45 [petef] zakim, unmute me 13:15:45 [Zakim] petef was not muted, petef 13:15:45 [claudio] +039011228aahh is claudio 13:15:56 [hhalpin] C'mon no volunteers :( 13:15:59 [AlexPassant] q+ to ask about SIOC liaisons 13:16:02 [petef] I volunteered 13:16:07 [petef] audio problems 13:16:08 [hhalpin] Zakim, q+ 13:16:08 [Zakim] I see AlexPassant, hhalpin on the speaker queue 13:16:14 [hhalpin] Zakim, ack 13:16:14 [Zakim] I don't understand 'ack', hhalpin 13:16:17 [hhalpin] Zakim, q- 13:16:17 [Zakim] I see AlexPassant on the speaker queue 13:16:19 [petef] for Social Network Portability Group List 13:16:21 [jsalvachua] i may help with other groups, with the vcard ietf group 13:16:27 [mischat] renanto is on the wiki as the liason with "Policy Language Interest Group" 13:16:29 [hhalpin] Zakim, ack AlexPassant 13:16:29 [Zakim] AlexPassant, you wanted to ask about SIOC liaisons 13:16:30 [Zakim] I see no one on the speaker queue 13:16:36 [petef] yes 13:16:37 [tinkster] can hear 13:16:39 [mischat] yes 13:17:01 [hhalpin] \me Zakim, mute me 13:17:02 [mischat] apassant, to find a person to be SOIC liaison 13:17:17 [uldis] uldis has joined #swxg 13:17:24 [AlexPassant] Uldis Bojars or John Breslin can be SIOC liaisons 13:17:28 [AlexPassant] her's uldis 13:17:30 [cperey] no sound 13:17:32 [hhalpin] OK - could you check on them. 13:17:50 [hhalpin] ACTION: AlexPassant to see about SIOC liason 13:17:50 [trackbot] Created ACTION-15 - See about SIOC liason [on Alexandre Passant - due 2009-06-03]. 13:17:51 [uldis] sounds good 13:17:53 [mischat] AlexPassant to check to see if we can find a SIOC liaison 13:17:56 [rreck] yeah call went silent 13:18:03 [hhalpin] Zakim, unmute me 13:18:03 [Zakim] hhalpin should no longer be muted 13:18:07 [Zakim] -AlexPassant 13:18:13 [mischat] i can hear you 13:18:15 [cperey] I hear only Harry 13:18:18 [rreck] i can hear you too 13:18:26 [petef] I have added myself to wiki as volunteer liason for data portability, diso and social network portability 13:18:31 [hhalpin] ACTION: [DONE] danbri sketch a 5 line template for interaction with other groups (cf InvitedExperts, DiscussionTopics) 13:18:31 [trackbot] Sorry, couldn't find user - [DONE] 13:18:33 [Zakim] + +2 13:18:34 [petef] I *can't* speak 13:18:46 [petef] Zakim has my number wrong :-( 13:18:48 [hhalpin] ACTION: [DONE] karl to produce a template for TF deliverables. 13:18:48 [trackbot] Sorry, couldn't find user - [DONE] 13:18:50 [mischat] zakim, unmute petef 13:18:50 [Zakim] petef was not muted, mischat 13:18:59 [AlexPassant] Zakim, +2 is me 13:18:59 [Zakim] +AlexPassant; got it 13:19:03 [karl] zakim, unmute me 13:19:03 [Zakim] karl should no longer be muted 13:19:03 [hhalpin] Karl - do you wish to explain your template? 13:19:07 [mischat] Karl put together a template for the user stories 13:19:09 [hhalpin] \me Zakim, mute me 13:19:17 [mischat] karl has put up 2 templates 13:19:27 [rreck] are user stories aka use cases? 13:19:31 [petef] zakim, P24 is really me. 
13:19:31 [Zakim] sorry, petef, I do not recognize a party named 'P24' 13:19:31 [mischat] yup rreck 13:19:40 [mischat] so user stories seemed too long 13:19:57 [rreck] i am developing a use case atm 13:20:02 [mischat] karl suggests we should be concise and to the point 13:20:31 [cperey] can someone put the URI to the templates into IRC 13:20:36 [tinkster] Karl's template - 13:20:42 [mischat] goal of the templates are to speed up the writing 13:20:48 [cperey] thanks Toby! 13:21:06 [mischat] templates will give us a common look and feel 13:21:15 [Zakim] + +1.631.704.aajj 13:21:27 [tinkster] zakim, mute me 13:21:27 [Zakim] tinkster was already muted, tinkster 13:21:37 [hhalpin] \me Zakim, unmute me 13:21:39 [mischat] Karl is open to comments/modifications re: the templates 13:22:09 [mischat] hhalpin happy with the user stories templates 13:22:09 [petef] how come petef.a not petef? 13:22:30 [mischat] hhalpin would like a template for the final report to be put up on the wiki 13:22:39 [mischat] harry asks if anyone has any experience in this ? 13:22:41 [hhalpin] Does anyone have a final report template as well? 13:23:12 [hhalpin] Does anyone want to take that action - i.e. finding a template for final deliverables? 13:23:21 [tinkster] microformats.org write all specs on mediawiki - perhaps useful? 13:23:21 [mischat] karl stated that someone should take an action to port the final report template to the wiki 13:23:52 [mischat] can we not look at other XGs? 13:24:13 [AlexPassant] can a chair close this action -> 13:24:22 [ivan] ivan has joined #swxg 13:24:24 [adam] i can take a stab at it 13:24:40 [hhalpin] ACTION: adam to find a good final report template and port it to the wiki 13:24:40 [trackbot] Created ACTION-16 - Find a good final report template and port it to the wiki [on Adam Boyet - due 2009-06-03]. 13:24:56 [mischat] hhalpin: adam seemed to suggest that he would look into the template 13:25:12 [hhalpin] 3. Task Forces 13:25:19 [mischat] hhalpin asks if anyone has any comments re: the organisation so we can move onto the task force issue 13:25:45 [mischat] Task force issues: should we merge context / privacy ? 13:25:47 [cperey] zakim, unmute me 13:25:47 [Zakim] cperey was not muted, cperey 13:25:55 [karl] zakim, mute me 13:25:55 [Zakim] karl should now be muted 13:25:56 [hhalpin] Context and Privacy Task Force (Karl Dubost)? For Portability and Architectures Task Force (@@)? 13:26:07 [tinkster] i/3. Task Forces/TOPIC: Task forces/ 13:26:19 [mischat] so, harry is asking if people like proposed task force titles 13:26:36 [petef] Portability and Architectures - jsalvachua and petef volunteered last telecon. 13:26:39 [mischat] cperey: has no preference for times, but thinks we need critical mass 13:26:50 [mischat] and we need a clear agenda 13:27:07 [petef] 13:27:26 [petef] zakim, unmute petef.a 13:27:26 [Zakim] petef.a should no longer be muted 13:27:29 [hhalpin] Maybe we could remind people to sign up for task forces 13:27:30 [karl] zakim, unmute me 13:27:30 [Zakim] karl should no longer be muted 13:27:30 [mischat] cperey participation has gone down since the start of XG, cperey thinks we should find out how many people are interested in each Task force 13:27:41 [jsalvachua] petef : we both may start together to push the task force 13:27:48 [mischat] who is speaking ? 
13:27:51 [karl] karl *MAY* be the task force leader ;) 13:27:54 [petef] zakim, mute me 13:27:54 [Zakim] petef should now be muted 13:28:02 [cperey] zakim, mute me 13:28:02 [Zakim] cperey should now be muted 13:28:17 [mischat] should we find out how many are interested in each task force 13:28:23 [petef] I find this constant un/muting incredibly frustrating 13:28:43 [mischat] I thought we had leaders ? 13:28:55 [tinkster] Also interested in portability/arch but not leading. 13:28:59 [cperey] I agree 13:29:06 [hhalpin] Wiki page for each of these task forces? 13:29:06 [petef] where? 13:29:10 [hhalpin] I don't think we do. 13:29:13 [cperey] what next is unclear! 13:29:14 [tinkster] I can set up template wiki pages for them. 13:29:22 [mischat] hhalpin: thinks we need a wiki page for the task forces 13:29:26 [petef] I will draft one for portability and architectures 13:29:43 [hhalpin] ACTION: tinkster to draft wiki pages for task forces 13:29:43 [trackbot] Created ACTION-17 - Draft wiki pages for task forces [on Toby Inkster - due 2009-06-03]. 13:29:48 [mischat] toby to set up wiki page re task forces, and people should add what they are thinking 13:29:58 [petef] caribou, how? 13:30:34 [petef] OK, sorry, muted on skype 13:30:35 [jsalvachua] zakim, unmute me 13:30:35 [Zakim] jsalvachua should no longer be muted 13:30:36 [mischat] portability task force, so hhalpin is interested in how the W3C can promote how data can be made portable 13:30:44 [petef] Can anyone else hear feedback? 13:30:52 [mischat] zakim, mute petef.a 13:30:52 [Zakim] petef.a should now be muted 13:30:54 [karl] zakim, mute me 13:30:54 [Zakim] karl should now be muted 13:31:00 [mischat] sorry petef.a 13:31:05 [hhalpin] joaquin? 13:31:12 [jsalvachua] i am not sure if you hear me 13:31:23 [petef] No, I don't hear you 13:31:25 [mischat] no i can only hear typing 13:31:28 [rreck] i would be happy to contribute but would like to have the discussion on the list 13:31:30 [tinkster] I can just hear typing. 13:31:54 [tinkster] 13:32:01 [petef] Great 13:32:07 [mischat] jsalvachua: said that he will try and populate the wiki, regarding a roadmap for the portability task froce 13:32:43 [hajons] +q 13:32:47 [tinkster] 13:32:55 [hhalpin] 4. Invited Guests 13:32:57 [mischat] I will put some stuff on the wiki for privacy task force 13:33:06 [hajons] -q 13:33:44 [hhalpin] Anyone else? 13:34:00 [mischat] invited speaker lists have been populated for the widget topic and the Vcard topic, does anyone else have any ideas on this ? 13:34:30 [mischat] hajons, asks if people should just write to the wiki, if they want to join a task force ? 13:34:41 [petef] Yes, just sign up to task force on wiki page 13:34:42 [mischat] or is there some other protocol 13:34:43 [hajons] how do we join the task forces 13:34:49 [hajons] ok 13:34:54 [adam] maybe once the task force pages are created, each person can add themselves as a member 13:34:57 [adam] of the task force 13:35:04 [hhalpin] +1 adam 13:35:10 [mischat] hhalpin states that we should just add names to the list of task forces on the wiki 13:35:16 [rreck] i will join the privacy task force 13:35:22 [mischat] 13:35:25 [mischat] ;) 13:35:29 [rreck] i dont feel capable of leading it 13:35:30 [mischat] go toby 13:35:46 [Zakim] - +1.631.704.aajj 13:36:00 [mischat] are we going to have invited guest for privacy and context ? 
13:36:20 [cperey] zakim, unmute me 13:36:20 [Zakim] cperey should no longer be muted 13:36:25 [mischat] harry wodners if there is any mobile interest 13:36:30 [mischat] from the group 13:36:35 [hajons] yes, I will propose a guest on context / mobile 13:36:46 [hhalpin] tim gave his regrets and daniel applequist can't make this meeting in particular :( 13:37:08 [hhalpin] Perhaps you can explain what the OSLO group is? 13:37:19 [hhalpin] Does it have a web-page? 13:37:20 [Zakim] + +1.631.877.aakk 13:37:31 [hhalpin] Open Sharing of LOcations 13:37:39 [mischat] christine, is interested the mobile technologies, and christine also contacted someone (?) external, and they are not interested in developing protocols 13:38:10 [mischat] OSLO group announced start earlier this year, and they are NOT interested with speaking to W3C 13:38:11 [tinkster] Open Sharing of Location-based Objects (OSLO) - 13:38:31 [hhalpin] Christine, perhaps stay in touch with them? 13:38:36 [caribou] 13:39:03 [mischat] there are others which could speak to the w3c 13:39:13 [mischat] but we need some specific questions 13:39:13 [hhalpin] Maybe brainstorm on the wiki what specific questions would be relevant to these mobile operators? 13:39:25 [mischat] so that christine could approach people 13:39:32 [mischat] who is speaking ? 13:39:40 [rreck] lubna 13:39:45 [tinkster] zakim, who's making noise 13:39:45 [Zakim] I don't understand 'who's making noise', tinkster 13:39:46 [rreck] lubna dajani 13:39:57 [rreck] she isnt on IRC yet 13:40:05 [rreck] im trying to help her in another window 13:40:06 [mischat] lubna said that she has contacts in the mobile space 13:40:07 [hhalpin] Or what is part of their problems? 13:40:15 [hhalpin] What could they need? 13:40:45 [mischat] nokia people did context and mobile stuff, and I could ask Mor Naaman to talk about the Zonetag project 13:40:52 [mischat] but i am out of touch with the mobile stuff 13:41:18 [mischat] harry asked if we could have a wiki page / content regarding the mobile space 13:41:37 [hhalpin] ACTION: cperey to add mobile companies to to Invited Guests and to brainstorm what exact questions or topics would be most interesting 13:41:37 [trackbot] Created ACTION-18 - Add mobile companies to to Invited Guests and to brainstorm what exact questions or topics would be most interesting [on Christine Perey - due 2009-06-03]. 13:41:38 [mischat] christine would like an agenda to take to the mobile experts before contacting them 13:41:53 [mischat] now moving to the next topic 13:41:55 [hhalpin] 5. Creating User Stories on the Wiki 13:41:57 [mischat] UserStories 13:42:36 [cperey] zakim, mute me 13:42:36 [Zakim] cperey should now be muted 13:42:45 [mischat] sorry my phone cut out for a bit 13:42:46 [mischat] :) 13:43:15 [karl] zakim, unmute me 13:43:15 [Zakim] karl should no longer be muted 13:43:18 [hhalpin] any more use-cases? 
13:43:28 [mischat] there is some work i emailed round from a chap from cambridge 13:43:48 [mischat] which had some good examples of how privacy in the social web tends to look like 13:44:28 [mischat] karl thinks that we should add some more user stories, so that we can get a feel for what the XG should be looking at 13:44:48 [mischat] i can barely understand you karl i am sorry 13:44:51 [mischat] :) 13:44:56 [rreck] yes i will add a story 13:45:01 [hhalpin] I feel we are still missing some use-cases regarding businesses and developers 13:45:11 [rreck] i wanted to get privacy classes but no one answered my email 13:45:14 [mischat] ah excellent point 13:45:16 [karl] karl: we should take the current user stories and check them against the actual social networks such as frienfeed, facebook, etc. 13:45:24 [karl] then we can see if our cases make sense 13:45:37 [karl] and then we can identify if there are missing ones. 13:45:37 [mischat] i posted about this 13:45:42 [rreck] great idea 13:45:45 [tinkster] I've got a developer story - I'll create an action for myself. 13:45:55 [mischat] harry says that we should have a matrix 13:45:56 [rreck] social network matrix 13:46:06 [tinkster] ACTION tinkster to document developer stories on wiki. 13:46:06 [trackbot] Created ACTION-19 - Document developer stories on wiki. [on Toby Inkster - due 2009-06-03]. 13:46:10 [mischat] showing how social networks uphold privacy 13:46:17 [mischat] SO, i have pointed to a matrix 13:46:19 [caribou] almost all the user stories that we have are related to privacy/data protection 13:46:20 [karl] ACTION: karl to create the matix to be filled 13:46:20 [trackbot] Created ACTION-20 - Create the matix to be filled [on Karl Dubost - due 2009-06-03]. 13:46:21 [mischat] we have developed at garlik 13:46:22 [rreck] can someone help me find classes of privacy 13:46:32 [mischat] but it is for government institutions 13:46:42 [mischat] about what they do with your data 13:46:50 [hhalpin] mischa - could we expland the matrix to deal with commercial social networks? 13:46:53 [mischat] i could put to a similar oen for social networking sites 13:47:16 [hhalpin] Maybe we could look through alexa to get out the top social networking sites. 13:47:19 [hhalpin] I can do that... 13:47:28 [mischat] karl says that he will put up a matrix based on the current user stories 13:47:36 [hhalpin] ACTION: To retrieve top X social networking sites from the top 500 sites of Alexa 13:47:36 [trackbot] Sorry, couldn't find user - To 13:47:48 [hhalpin] ACTION: hhalpin to retrieve top X social networking sites from the top 500 sites of Alexa 13:47:48 [trackbot] Created ACTION-21 - Retrieve top X social networking sites from the top 500 sites of Alexa [on Harry Halpin - due 2009-06-03]. 13:47:55 [mischat] so that people can look at each individual social networking sites 13:48:06 [cperey] Karl, what criteria do you want for this list? 13:48:12 [cperey] the "top" social networks? 13:48:21 [karl] cperey: Alexa traffic 13:48:30 [cperey] irrelevant 13:48:33 [cperey] for mobile 13:48:37 [karl] s/cperey:/cperey,/ 13:48:45 [mischat] 13:48:48 [karl] cperey, what would be your criteria? :) 13:48:59 [hhalpin] how could we get the top X social networking sites for mobile? 13:48:59 [cperey] Number of unique users per month 13:49:06 [hhalpin] is that list available anywhere? 
13:49:10 [cperey] more relevant for all types of social networks 13:49:18 [mischat] 13:49:27 [cperey] I have a list (not published) 13:49:34 [rreck] wouldnt you just select out of the top 500 alexa sites? 13:49:45 [hhalpin] I thin that list on wikipedia uses the company's own data, right? 13:49:51 [mischat] there has been some work from cambridge 13:49:54 [mischat] were they looked into 13:49:58 [karl] I will do Mixi (social network in Japan) because I have an account and they don't open account to people outside Japan 13:49:59 [mischat] social networking sites 13:50:01 [mischat] T&Cs 13:50:03 [mischat] 13:50:13 [mischat] i would like to invite the phd student which did the work 13:50:14 [cperey] this is a metrics question 13:50:16 [rreck] humming is back 13:50:25 [mischat] looking for the persons link 13:50:32 [hhalpin] I'm happy to go through alexa if someone else will merge it with wikipedia and Christine's list 13:50:33 [mischat] zakim, who is making noise ? 13:50:37 [cperey] how do you measure a social network 13:50:44 [Zakim] mischat, listening for 10 seconds I heard sound from the following: karl (5%) 13:50:47 [hhalpin] this is returning to the metric question 13:50:52 [karl] zakim, mute me 13:50:52 [Zakim] karl should now be muted 13:51:12 [cperey] OK 13:51:14 [karl] +1 for merging 13:51:18 [mischat] +1 merging 13:51:20 [rreck] +1 merging 13:51:22 [cperey] 30-50 social networks 13:51:32 [rreck] yes that sounds reasonable 13:51:32 [hhalpin] So we merge list from alexa\wikipedia with Christine's list? 13:51:32 [cperey] by country? worldwide? 13:51:37 [rreck] worldwide 13:51:44 [hhalpin] I would assume world-wide at first, and then later we can break it down by country 13:51:45 [mischat] hhalpin: asked if we could merge christine's list with the alexa rankign and the wikipedia list so as to pick 30 social networking sites 13:51:55 [hhalpin] unless your data is already broken down by country Christine 13:52:05 [cperey] I can work with Harry on this 13:52:12 [cperey] zakim unmute me 13:52:25 [hhalpin] zakim, unmute cperey 13:52:25 [Zakim] cperey should no longer be muted 13:52:31 [mischat] 13:52:48 [melvster] I would suggest you also need IM based networks such as Skype, GTalk, XMPP, if they are not included already, as they have significant usage and maturity 13:52:50 [mischat] cperey's data is broken down by country 13:53:14 [mischat] cperey: on/deck and business models. 13:53:14 [hhalpin] 30 or 50? 13:53:23 [hhalpin] Start with 30 and then build if needed? 13:53:26 [mischat] cperey would be happy with 30 13:53:29 [mischat] +1 13:53:35 [adam] +1 13:53:46 [mischat] cperey: will make a list of the last 30 13:53:47 [rreck] top based on number of users? 13:53:57 [mischat] i would like us to check their Terms and Conditions 13:54:10 [mischat] and see if they actually abide by it 13:54:37 [MacTed] MacTed has joined #swxg 13:54:38 [hhalpin] ACTION: cperey to make list of top 30 to do profiles on, to merge with hhalpin's list on alexa 13:54:38 [trackbot] Created ACTION-22 - Make list of top 30 to do profiles on, to merge with hhalpin's list on alexa [on Christine Perey - due 2009-06-03]. 13:54:54 [karl] I'll have to find someone for 13:55:04 [hajons] Christine, do you list mobile access as a feature too? 13:55:09 [Zakim] -??P20 13:55:11 [hhalpin] we do it as more of long-term action 13:55:17 [mischat] cperey: data has information regarding social networking sites, and their features 13:55:21 [mischat] not terms and conditions 13:55:24 [hhalpin] does your list have the feature? 
13:55:37 [hhalpin] feature-criteria like instant-messaging, and all of that? 13:55:57 [mischat] all of the social networking sites on christine's list are mobile centric 13:56:01 [hhalpin] cperey: "PC-centric" vs. "mobile-centric" and then a continuum inbetween. 13:56:22 [karl] what is a mobile access? specific software for mobile devices? 13:56:22 [rreck] my cell does the same things as my PC 13:56:25 [hhalpin] sounds good to me 13:56:31 [mischat] cperey: will get her mobile-centric list and we should look at it 13:56:55 [hhalpin] yes rreck, but some sites don't well with mobile phones if you don't have a gphone/iphone/other dataphone 13:57:06 [mischat] and add an pc-centric social networkings site to the cperey's list 13:57:11 [Norm] Norm has joined #swxg 13:57:14 [hhalpin] can we check to see if any of these sites take advantage of user-context, like geolocation from mobile phone? 13:57:56 [mischat] (not good at scribing) 13:57:57 [mischat] ! 13:58:00 [hhalpin] Sounds great! 13:58:05 [Zakim] +Norm 13:58:15 [mischat] matrix of social networking site 13:58:18 [mischat] and their features 13:58:21 [cperey] where does this live? 13:58:36 [hhalpin] we should a manufacture a wiki page for the list 13:58:37 [mischat] we need a wiki page for this list 13:58:40 [cperey] yes, please 13:58:52 [cperey] yes, I will fill in 13:59:11 [cperey] I need to sign off and go to another meeting, bye all 13:59:20 [petef] bye cperey 13:59:25 [mischat] vcard and portability issues will have its own discussion after this call 13:59:36 [hhalpin] any more comments before we move to vcard? 13:59:42 [hhalpin] such as things we are missing? 13:59:44 [mischat] does anyone have any issues, there are lots of quiet people about? 13:59:47 [Zakim] -tinkster 13:59:54 [mischat] have we looked over anything 13:59:56 [tinkster] Dammit. My phone's gone dead. 14:00:01 [tinkster] Still on IRC though. 14:00:06 [mischat] are we missing anything obvious ? 14:00:10 [Zakim] -cperey 14:00:13 [rreck] im not sure 14:00:22 [mischat] this should be an ongoing process 14:00:30 [mischat] if you think we are missing something 14:00:36 [hhalpin] 6. Invited Guest Telecon: VCard in RDF 14:00:38 [mischat] let the mailing list know 14:00:47 [rreck] im trying to find classes of privacy and i cant be the first person to want them 14:01:12 [mischat] rreck: look at this guys work 14:01:15 ? 14:01:47 [mischat] I have to leave this call, it is 3 14:01:49 [Zakim] -karl 14:01:51 [mischat] :( 14:01:55 [hhalpin] 14:01:57 [rreck] mischat: ty 14:01:59 [hhalpin] That's the older vCard in RDF format 14:02:06 [Zakim] - +1.631.877.aakk 14:02:19 [tinkster] Aren't rdf:Bag/rdf:Alt/rdf:Seq generally seen as poor cousins of rdf:List these days? 14:02:28 [Zakim] -petef.a 14:02:44 [petef] I have to leave too, bye 14:02:45 [hhalpin] 1) No use of containers in vCard at all 14:02:51 [hhalpin] 2) Using *only* rdf:List 14:02:52 [rreck] why isnt rdf:bag enough 14:02:53 [tinkster] If the order of items is important, use rdf:List, otherwise don't use a container at all. 14:03:01 [hhalpin] 3) Letting it all be a free for all. 14:03:03 [mischat] i have to leave this chat now :( 14:03:06 [mischat] i am sorry 14:03:07 [Zakim] +??P10 14:03:13 [hhalpin] Someone else can scribe? 
14:03:23 [hhalpin] Zakim, P10 is PeterMika 14:03:23 [Zakim] sorry, hhalpin, I do not recognize a party named 'P10' 14:03:28 [hhalpin] Zakim, ??P10 is PeterMika 14:03:28 [Zakim] +PeterMika; got it 14:03:35 [ivan] zakim, dial ivan-voip 14:03:35 [Zakim] ok, ivan; the call is being made 14:03:36 [Zakim] +Ivan 14:03:45 [mischat] right i am sorry i am off now, you are less a scribe now 14:03:46 [mischat] bye all 14:03:50 [Zakim] -mischat 14:03:58 [hhalpin] is there use for RDF containers? 14:04:11 [hhalpin] PeterMika: vCard is 99 percent hCard 14:05:44 [Norm] Really? Surely there are gobs of vcards out there never rendered in HTML at all 14:06:08 [hhalpin] PeterMika: vCard in RDF is fairly minimal 14:06:18 [hhalpin] PeterMika: Lots of hCard - between 1-2 billion URLs 14:06:33 [Zakim] -Carine 14:06:35 [caribou] caribou has left #swxg 14:06:45 [hhalpin] Can you share the stats on the usage of the attribute? 14:07:04 [tinkster] hhalpin: But 98% of those 1-2 billion URLs are presumably on a handful of domain names. Just a few script tweaks could change everything. 14:07:17 [Zakim] - +49.173.515.aaff 14:07:34 [timbl] timbl has joined #swxg 14:07:40 [hhalpin] should we merge or not do data-typring? 14:08:00 [timbl] Zakim, call timbl-office 14:08:00 [Zakim] ok, timbl; the call is being made 14:08:02 [Zakim] +Timbl 14:08:26 [hhalpin] so for complex structure is the older version better? 14:09:30 [hhalpin] do we need or/want to substructure? 14:09:31 [tinkster] zakim, mute me 14:09:31 [Zakim] sorry, tinkster, I do not know which phone connection belongs to you 14:09:45 [hhalpin] I thin Renato had some concern for the subset that wasn't hCard. 14:09:57 [Zakim] +tinkster 14:09:59 [tinkster] zakim, mute me 14:09:59 [Zakim] tinkster should now be muted 14:10:10 [jsalvachua] sorry i have to leave now, sorry, see you. 14:10:20 [hhalpin] And there was a big argument over round-tripping in SWIG a while back... 14:10:21 [Zakim] -jsalvachua 14:10:24 [jsalvachua] jsalvachua has left #swxg 14:10:34 [hhalpin] timbl: I use this for modelling my addresses 14:10:47 [hhalpin] timbl: these are proper vCards 14:11:19 [hhalpin] timbl: but I can see working with well-defined subset, but would like round-tripping in this subset 14:11:25 [tinkster] There are some parts of vCard which are pretty useless. 14:12:32 [hhalpin] timbl: made contact ontology 14:12:40 [hhalpin] norm: we should stick faithfully to vCard spec 14:12:50 [hhalpin] Here is some of the structuring I think: 14:13:02 [hhalpin] <vCard:EMAIL rdf: <rdf:value> corky@qqqfoo.com </rdf:value> <rdf:type rdf:resource=" "/> 14:13:14 [Norm] More precisely: I said that if we claim to model vCard, we should model it. If not, we shouldn't claim to be modeling it. 14:13:16 [hhalpin] You CAN do that with the newer hCard 14:13:21 [Zakim] +uldis 14:13:33 [tinkster] The CLASS property is useless. 14:13:33 [uldis] Zakim: mute me 14:13:37 [hhalpin] It's just it's a bit confusing because we then use v:EMAIL as a subject, not a predicate 14:13:46 [hhalpin] but we can type predicates 14:13:55 [hhalpin] this I think leads to problems with OWL-DL. 14:13:57 [tinkster] MAILER is pretty useless too. 14:13:59 [hhalpin] But I'm OK with that. 14:14:08 [tinkster] (And has been removed in latest drafts.) 14:14:26 [timbl] CLASS was for what groups it is in? 14:14:45 [tinkster] CLASS has three allowed values: PRIVATE, CONFIDENTIAL and PUBLIC. 14:15:29 [timbl] You don't need a bifg process to make a note obsolete, i think -- jsut change the Status Of This Document. 
14:15:48 [hhalpin] no process issue? 14:15:53 [timbl] no process issue. 14:15:55 [hhalpin] So, then we can just merge it with Renato's? 14:16:52 [hhalpin] So, no process 14:17:20 [hhalpin] I would like to NOT have more than one URI for vCard 14:17:22 [Norm] The two are these: 14:17:30 [Norm] 14:17:37 [Norm] 14:17:45 [timbl] timbl has changed the topic to: 14:17:58 [hhalpin] Then I would like to put the SIMPLE stuff up front in Renato's, and put more difficult things involving data-structuring and rdf:List towards the end of the spec 14:18:24 [hhalpin] There's also a silly difference in capitalization 14:18:26 [hhalpin] I prefer lower-case 14:19:02 [tinkster] Newer URI comes up #1 on Google for me, searching "vcard rdf". 14:19:14 [tinkster] (without quotes) 14:19:46 [tinkster] Norm's spec==namespace URI. 14:19:57 [hhalpin] I prefer having spec URI == namespace URI and then use conneg 14:20:06 [tinkster] Rennato's namespace = 14:20:35 [hhalpin] I mean, one option 14:21:13 [hhalpin] I mean one option is that we re-use Renato's URI, then use as the namespace URI. 14:21:21 [hhalpin] And if one requests "text/html" for 14:21:52 [tinkster] +1 to timbl's suggestion of marking old one as obsolete and recommending new. 14:22:38 [hhalpin] That's the issue with TR. 14:22:44 [tinkster] hCard GRDDL profile < > uses 2006 namespace. 14:22:47 [hhalpin] Well, Norm, I think this is at least part of the community. 14:23:12 [Norm] Fair enough 14:23:39 [hhalpin] What is way forward here? 14:23:56 [ivan] q+ 14:24:41 [tinkster] +1 to just an "obsolete" note, as long as it's clear. 14:25:00 [ivan] q- 14:25:48 [timbl] Proposed action: Harry to check with Renato he is OK with: 14:26:03 [ivan] action on harry: would refer to the new version, there will be a 'previous version' link to the current one 14:26:03 [trackbot] Sorry, couldn't find user - on 14:26:29 [tinkster] ACTION hhalpin to would refer to the new version, there will be a 'previous version' link to the current one 14:26:29 [trackbot] Created ACTION-23 - would refer to the new version, there will be a 'previous version' link to the current one [on Harry Halpin - due 2009-06-03]. 14:26:40 [hhalpin] that works for me and I think Renato will agree with it. 14:26:50 [timbl] ... would be kept as the "latest-version" URI, and Reanto's veion woul dbe linke dfrom the new one as a "previous version". 14:26:58 [hhalpin] I guess the other question is we keep the "2006" namespace? 14:27:11 [ivan] q+ 14:27:20 [hhalpin] 14:27:23 [tinkster] My vote: keep both namespaces but only recommend 2006. 14:27:32 [timbl] That is Renato's namespace URI. 14:28:00 [hhalpin] Ivan: suggests namespace URIs that tend to use version causes version 14:28:07 [hhalpin] Ivan: So let's use "2006" 14:28:08 [timbl] I agree that it is unwise to use a version number in the URI 14:28:16 [Zakim] -??P5 14:28:21 [timbl] The year has no semantics 14:28:26 [tinkster] It's based on vCard 3.0. 14:28:30 [hhalpin] I am also a bit against years in URIs, but that's a minority opinion. 14:28:33 [timbl] But you can use /ns/vcard if you want 14:28:38 [hhalpin] Ivan: would prefer to vCard 14:28:57 [hhalpin] PeterMika: I would agree with "2006" 14:28:58 [timbl] or /ns/pim/adr 14:29:13 [hhalpin] For the time being let's use "2006" 14:29:18 [tinkster] Doesn't that just create yet another URI to include in SPARQL queries, etc? 14:29:40 [tinkster] We already have one too many. 
14:29:53 [timbl] Ok, so keep the same 2006 ns 14:29:57 [ivan] 14:30:07 [lubna] lubna has joined #swxg 14:30:23 [hhalpin] RESOLVED: keep 14:31:06 [hhalpin] The vCard ontology needs examples. At least one that explains you can attach the properties to URIs for people, not just cards! 14:31:17 [hhalpin] we need a list of examples 14:31:21 [hhalpin] from experience. 14:31:30 [Zakim] -Hakan 14:31:33 [hhalpin] PeterMika: Transforming hCard into RDF 14:31:44 [hhalpin] PeterMika: vCard represents both a person and organization 14:32:05 [hhalpin] PeterMika: hCard is value of organization and fn are the same, the hCard is actually representing an organization and not a person 14:32:12 [timbl] That is not RDF 14:32:20 [tinkster] q+ 14:32:22 [hhalpin] PeterMika: The equivalent properties determine type of object 14:32:25 [ivan] q- 14:32:32 [timbl] So the hcrad > vcard mapping has to do some mapping 14:32:46 [hhalpin] PeterMika: so address and whatnot can all apply to person 14:32:54 [hhalpin] PeterMika: AND organization 14:32:59 [hhalpin] PeterMika: Does person have vCard etc. 14:33:06 [hhalpin] PeterMika: And then the vCard have an address etc. 14:33:17 [hhalpin] PeterMika: These are two main points people struggle with 14:33:25 [hhalpin] TimBL: The last one is the major one. 14:33:31 [hhalpin] TimBL: Are we modelling a file or person 14:33:38 [hhalpin] TimBL: The documentation leads this question open 14:34:05 [hhalpin] I'm noting unclarity about this is WHY there's no examples :) 14:34:12 [hhalpin] I could not consensus on this. 14:34:25 [timbl] s/TimBL/PeterMika/ 14:34:56 [tinkster] q- 14:34:59 [tinkster] q+ 14:35:02 [ivan] q+ 14:35:08 [timbl] I agree that one should moddl the person not the card. 14:35:16 [timbl] Like you model a book, not a library card. 14:35:17 [tinkster] zakim, unmute me 14:35:17 [Zakim] tinkster should no longer be muted 14:35:25 [adam] yes 14:35:48 [hhalpin] toby: 4.0 includes a property "kind" that demarcates between people organization and group 14:35:51 [hhalpin] +1 vCard 4.0 14:36:07 [hhalpin] timbl: which would be a functional mapping to a RDF class 14:36:28 [hhalpin] toby: individual or pre-defined group or organization 14:36:31 [AdamB] AdamB has joined #swxg 14:36:32 [hhalpin] toby: also maybe one for "place" 14:36:43 [tinkster] zakim, mute me 14:36:43 [Zakim] tinkster should now be muted 14:36:48 [timbl] Those shoudl defintely map to classses. 14:37:09 [hhalpin] how stable is vCard 4.0? 14:37:13 [hhalpin] Should we track it? 14:37:14 [tinkster] Not especially stable. 14:39:04 [tinkster] yes, certainly - should be able to attach these to Person/Organisation URIs. 14:39:28 [hhalpin] ivan: hhalpin said having the same person with several vCard is an edge-case 14:39:34 [hhalpin] ivan: since I have two addresses 14:39:51 [hhalpin] ivan: so in my phone I have two entries for my name 14:40:14 [tinkster] I think vCard 4.0 drafts have ways of representing multiple sets of contact information in one card. 14:40:27 [hhalpin] I am liking vCard 4.0 :) 14:40:40 [tinkster] e.g. this phone, this fax and this address are for one set of uses; and this phone and this address are for another. 14:40:50 [tinkster] I've not really studied that part of the syntax though. 14:41:36 [hhalpin] timbl: so you'll miss the fact that these two vCards are not the same in RDF. 
14:41:39 [uldis] a person may have different vCards which they "give" to people same as one may have different versions of a business card 14:41:44 [hhalpin] timbl: about the same person 14:41:53 [hhalpin] timbl: does this mean ontology is broken/ 14:41:56 [hhalpin] q+ 14:42:01 [tinkster] q- 14:42:03 [ivan] q- 14:42:31 [hhalpin] ivan: vCard represents me or my address, I think it represents my address 14:42:42 [hhalpin] timbl: we do not walk about what a vCard represents 14:43:19 [tinkster] [ a vcard:Vcard] vcard:sameOwnerAs [a vcard:Vcard ] . 14:43:47 [tinkster] (No, vCard doesn't have a sameOwnerAs property, but we could always define such a term.) 14:43:53 [pmika] pmika has joined #swxg 14:44:03 [Norm] s/modelling/modeling/ 14:44:40 [hhalpin] so we don't drop the vCard class 14:44:55 [hhalpin] we keep it and keep domains pretty open-ended 14:45:01 [libby] I think that's because the spec's a bit ambivalent norm 14:45:06 [hhalpin] but in our examples we use People and Organizations 14:45:28 [hhalpin] This would make it clear to users, since most users will just look at examples 14:45:43 [hhalpin] timbl: we should be default not give it any class. 14:46:42 [hhalpin] should we add this to the GRDDL and the spec, this weird hCard algorithm? 14:46:49 [hhalpin] to determine people and organization? 14:47:01 [pmika] if fn=org => organization is not possible to express in OWL 14:47:14 [hhalpin] so it should be in GRDDL? 14:47:29 [hhalpin] Not express it in OWL, but *mention* it in RDF spec and then implement it in GRDDL. 14:47:38 [pmika] yes, it has to be in the hcard-to-rdf conversion 14:48:04 [hhalpin] we should make a test case here 14:48:20 [hhalpin] org class only has name and unit. 14:48:39 [hhalpin] PeterMika: So these properties should be extended to organization class 14:48:49 [hhalpin] PeterMika: ALL properties can be extended to organization class 14:49:20 [hhalpin] PeterMika: for example, "adr". Strictly, in vCard people have addresses,not organizations, but people use addresses directly on organizations in hCard. 14:49:40 [tinkster] contact:SocialEntity ~= foaf:Agent. 14:50:20 [tinkster] orgname, orgunit 14:50:30 [tinkster] orgunit can be repeated. 14:51:45 [tinkster] [ a v:vCard ; v:fn "Tim Berners-Lee" ; v:org [ a v:Organization ; v:organization-name "MIT" ; v:organization-unit "CISAL" ] ] 14:51:54 [tinkster] is how it currently works. 14:52:49 [hhalpin] OK, am a bit confused. 14:53:58 [tinkster] Organisations are messy in vCard RDF because they're messy in vCard. 14:54:27 [timbl] org MIT unit CSAIL means memberOf [ a Unit; name "CSAIL"; partOf [ a Org; name "MIT"]] 14:54:45 [tinkster] There are organisation properties which "hang off" people, plus the convention of fn==org whih means that the entire vCard represents an organisation. 14:54:46 [hhalpin] PeterMika: allow both, have examples for both cases 14:55:00 [timbl] org MIT means memberOf [ a Org; name "MIT"] 14:55:08 [hhalpin] PeterMika: show an organization where the unit is not used, just give it name and some other properties. 14:55:30 [hhalpin] PeterMika: A case where the vCard is using to describe an organization 14:55:42 [tinkster] FOAF's model for people, orgs, membership is a lot more sensible. 14:55:45 [hhalpin] TimBL: You need to spot these patterns 14:56:04 [hhalpin] PeterMika: Yes, you need to do that in transform. 14:56:12 [hhalpin] Perhaps we can add that to GRDDL. 14:56:19 [tinkster] foaf:member 14:56:21 [hhalpin] Norm? 14:56:24 [timbl] Has FOAF got org is part of biggerOrg? 
14:56:38 [hhalpin] skos:widerThan :) 14:56:38 [tinkster] no, but dublin core has "hasPart/isPartOf" 14:56:41 [libby] foaf:Organization 14:56:56 [timbl] I wonder whether a gorup is a SocialEntity 14:57:08 [libby] don;t see any properties tho 14:57:14 [AlexPassant] 14:57:17 [libby] "This is a more 'solid' class than foaf:Group, which allows for more ad-hoc collections of individuals. These terms, like the corresponding natural language concepts, have some overlap, but different emphasis. " 14:57:31 [hhalpin] ok 14:57:32 [libby] not sure if there's any formal subclassing 14:57:36 [libby] probaby not 14:57:40 [timbl] Maybe socialEntity should be explicitly allowed to incldeu a group. 14:57:41 [libby] of group, that is 14:57:55 [AlexPassant] may be used together with foaf:member ? (":csail foaf:member :mit") 14:58:07 [libby] not very clear 14:58:20 [tinkster] AlexPassant: not sure of :csail foaf:member :mit. 14:58:25 [timbl] Good Q Alex 14:58:29 [uldis] re. earlier mention of SIOC - for representing organisations FOAF would be more appropriate that SIOC 14:58:45 [pmika] there is no equivalent of organization-unit in FOAF and I would not be in favor of bringing in a single FOAF class into the VCard spec 14:58:47 [tinkster] :microsoft foaf:member :w3c . 14:58:55 [timbl] FAOF more appropraite thatn SIOC? 14:59:32 [Zakim] -Timbl 14:59:37 [hhalpin] Re FOAF and VCard, I think we first fix vCard RDF spec, because that's relatively easy, and then see what the future of FOAF is in the next telecon. 14:59:51 [libby] sounds good 15:00:36 [AlexPassant] previous thread on foaf:Group / foaf:Organisation 15:00:44 [Norm] Good luck to all! See you on the next telcon 15:00:47 [Zakim] -Norm 15:00:53 [ivan] thanks to norm 15:01:24 [libby] cheers AlexPassant 15:01:28 [Zakim] -petef 15:01:56 [libby] did you get a response? doesn't look like it AlexPassant 15:02:03 [libby] perhaps bump the thread? 15:02:22 [AlexPassant] libby: unfortunately, no answer - I sent a similar one a year later but no answer as well 15:02:22 [hhalpin] republish it for next week? 15:03:19 [hhalpin] Formal note, it's an IG note, not a SWXG note. 15:03:24 [hhalpin] XGs can't do Notes even :) 15:03:42 [hhalpin] send us examples 15:03:51 [hhalpin] PeterMika will examples. 15:04:06 [hhalpin] Shall we wrap up? 15:04:10 [hhalpin] take care! 15:04:10 [Zakim] -tinkster 15:04:12 [Zakim] -Ivan 15:04:14 [ivan] ivan has left #swxg 15:04:14 [Zakim] -PeterMika 15:04:17 [Zakim] -libby 15:04:18 [Zakim] -AlexPassant 15:04:19 [Zakim] -hhalpin 15:04:19 [Zakim] -AdamB 15:04:21 [Zakim] -uldis 15:04:25 [hhalpin] Meeting adjourned 15:04:30 [hhalpin] RSSAgent, draft minutes 15:04:41 [hhalpin] RRSAgent, draft minutes 15:04:41 [RRSAgent] I have made the request to generate hhalpin 15:04:45 [Zakim] - +039011228aahh 15:04:46 [Zakim] INC_SWXG()9:00AM has ended 15:04:48 [Zakim] Attendees were +4222aaaa, AdamB, tinkster, +7.942.aabb, Hakan, hhalpin, Carine, jsalvachua, mischat, petef, libby, karl, +49.173.515.aaff, cperey, AlexPassant, +039011228aahh, 15:04:50 [Zakim] ... +1.631.704.aajj, +1.631.877.aakk, Norm, PeterMika, Ivan, Timbl, uldis 15:05:20 [tinkster] have fun! 15:06:42 [pmika] pmika has left #swxg 15:20:39 [tpa] tpa has joined #swxg 15:54:32 [tinkster] i/General Organization/TOPIC: General Organization/ 15:55:44 [tinkster] i/4. Invited Guests/TOPIC: Invited Guests/ 15:56:25 [tinkster] i/5. Creating User Stories on the Wiki/TOPIC: Creating User Stories on the Wiki/ 15:56:52 [tinkster] i/6. 
Invited Guest Telecon: VCard in RDF/TOPIC: Invited Guest Telecon: VCard in RDF/ 15:57:26 [tinkster] i/Convene SWXG WG meeting of 2009-05-27T13:00-15:00Z/TOPIC: Convene SWXG WG meeting of 2009-05-27T13:00-15:00Z/ 15:57:32 [tinkster] RRSAgent, draft minutes 15:57:32 [RRSAgent] I have made the request to generate tinkster 15:58:22 [tinkster] i/Convene SWXG WG meeting/TOPIC: Convene SWXG WG meeting of 2009-05-27T13:00-15:00Z/ 15:58:38 [tinkster] Chair: hhalpin 15:58:44 [tinkster] RRSAgent, draft minutes 15:58:44 [RRSAgent] I have made the request to generate tinkster 16:02:09 [tinkster] i/Excellent mischat./TOPIC: Convene SWXG WG meeting of 2009-05-27T13:00-15:00Z/ 16:02:17 [tinkster] RRSAgent, draft minutes 16:02:17 [RRSAgent] I have made the request to generate tinkster 16:04:32 [tinkster] s|SWXG WG meeting/TOPIC: Convene SWXG WG meeting|scratch this| 16:04:38 [tinkster] RRSAgent, draft minutes 16:04:38 [RRSAgent] I have made the request to generate tinkster 16:06:49 [tinkster] OK, I can't get rid of that last heading. :-( 16:11:53 [libby] hhalpin, fyi: 16:12:04 [libby] masaka uses 2006 ns 16:13:35 [oshani] oshani has joined #swxg 16:17:09 [tinkster] this might be worth a look - 16:19:03 [hhalpin] toby 16:19:13 [hhalpin] I just cleaned up the minutes 16:19:46 [hhalpin] i think it's acceptable now, thanks. 16:21:05 [tinkster] I've added some topic headings to them, but the "Convene" heading applied in the wrong place first time around, so now there are two "Convene" headings :-( 16:21:35 [hhalpin] before making any more changes, cvs ci out the latest version 16:21:42 [hhalpin] i just checked one in that merged our changes 16:21:50 [hhalpin] there may still be minor errors 16:22:07 [hhalpin] s/cvs ci/cvs co 16:22:28 [tinkster] I don't think I have CVS access to it (at least, not write access). I was just using RRSAgent's IRC interface. 16:22:36 [hhalpin] ah. 16:22:40 [hhalpin] that's why there was a cvs diff. 16:22:44 [hhalpin] we should get you cvs access. 16:23:00 [hhalpin] not sure if RRS agent does cvs differs, I think it should 16:23:10 [hhalpin] but I can get rid of last heading, where is it? At end? 16:23:38 [tinkster] It's now only showing in the TOC, but not the body of the message. 16:23:52 [hhalpin] ok, i got rid of it in toc 16:23:55 [hhalpin] let me cvs ci it back in. 16:24:53 [hhalpin] remind me to get you cvs access. 16:25:03 [hhalpin] it makes things like this a bit easier 16:25:19 [hhalpin] now check it 16:25:21 [hhalpin] should be fine 16:25:21 [tinkster] ACTION hhalpin to get tinkster CVS access. 16:26:10 [hhalpin] c'mon trackbot 16:26:11 [tinkster] There's some sort of CVS-added gunk just before the "General Organization" heading. 16:26:16 [trackbot] Created ACTION-24 - Get tinkster CVS access. [on Harry Halpin - due 2009-06-03]. 16:27:53 [hhalpin] ok, should be fixed, check again 16:30:14 [hhalpin] looks good to me 16:30:53 [tinkster] The first heading is still AWOL - in the contents, but not in the body. I think it was originally quite a bit before your "PROPOSED: to approve SWXG WG Weekly..." thing, so has probably been trimmed out. 16:33:57 [hhalpin] fixed 16:34:23 [hhalpin] ok, gotta run. 16:34:30 [hhalpin] thanks for all the help! 16:48:51 [Zakim] Zakim has left #swxg 18:02:52 [libby] libby has joined #swxg 18:08:13 [tpa] tpa has joined #swxg
http://www.w3.org/2009/05/27-swxg-irc
CC-MAIN-2017-17
refinedweb
10,352
62.41
Chrome exposes special capabilities to Extensions and Platform Apps through different APIs. The core API system is shared between Extensions and Platform Apps; APIs are defined and exposed in the same fashion for both. Before implementing a new API, it has to go through an approval process. This approval process helps ensure that the API is well defined, does not introduce any privacy, security, or performance concerns, and fits in with the overall product vision.

Default: Public

Extension and App APIs can either be public (available for any extension or app to potentially use, though frequently there are other constraints like requiring a permission) or private (only available to extensions or apps with a specific whitelisted ID). In general, private APIs should only be used for pieces of functionality internal to Chromium/Chrome itself (e.g., the translation utility, printing, etc.). Public APIs should always be the default, in order to foster openness and innovation.

Good reasons for a private API might be:

Bad reasons for a private API might be:

- The API is only needed for a Google property (other than Chrome/Chromium). In the spirit of openness, we should, when possible, provide people with the means to build alternatives. Just because something is needed by a Google property does not mean it wouldn't be useful to a third party.
- The API is needed by a Google property (other than Chrome/Chromium), and is too powerful to expose to any third-party extension or app. Generally, if an API is too powerful to expose to a third-party extension, we don't want to expose it to any kind of (non-component) extension, as it increases Chrome's attack surface. Typically, these security concerns can be addressed by finding an alternative or tweaking the API surface.
- This is just a quick-and-dirty API and we don't want to go through a long process. Quick-and-dirty hacks have a nasty habit of staying around for years, and often carry with them their own maintenance burdens. It's very frequently cheaper to design an API well and have it be stable than to have a quick solution that has to be constantly fixed.

Unless there is a compelling reason to make an API private, it should default to public.

In general, you (or rather, your team) will own the API you create. The Extensions and Apps teams do not own every API, nor would it be possible for those teams to maintain them all. This means that your team will be responsible for maintaining the API going forward.

Extensions Default: All Desktop Platforms

Extensions are supported on all desktop platforms (Windows, Mac, Linux, and ChromeOS). By default, an extension API will be exposed on all these platforms, but this can be configured to only be exposed on a subset. However, an API should only be restricted if there is strong reason to do so; otherwise, platforms should have parity.

Platform Apps: ChromeOS Only

Platform apps have been deprecated on all platforms except ChromeOS. We are not actively expanding the Chrome Apps platform significantly.

Starting the review process early is encouraged, and if some of the artifacts are missing or in progress, we're happy to work with you on it. It's better to have us review the API in principle to ensure it's something we're comfortable adding to the platform, and then work out the details, than to invest heavily only to find out that we don't want to add the API to the platform.
Feel free to file a bug without everything ready, or to email extension-api-reviews@chromium.org for advice and feedback!

In order to propose a new API, file a new bug with the appropriate template, and fill in the required information. This should include:

- API Namespace: The namespace of your API. This is how your API will be exposed in JS. For instance, the chrome.tabs API has the 'tabs' namespace.
- Target Milestone: The target milestone for releasing this API. It's okay for this to be a rough estimate.
- API OWNERS: Usually, you will be responsible for ownership of the API. List appropriate usernames, team aliases, etc.
- API Overview Doc: A link to your completed API Overview document.
- Design Doc(s): Any additional design docs. Depending on the complexity of the API, this might not be necessary with the API Overview doc above.
- Supplementary Resources (optional): Any additional resources related to this API. For instance, if this API is part of a larger feature, any PRDs, docs, mocks, etc. for that feature can be linked here (or through an associated crbug.com issue).

Note: This process does not eliminate the need for a larger design review, if one would otherwise be required. See go/chrome-dd-review-process for guidance (sorry, internal only. If this is a large feature, we recommend finding a member of the chrome team to help you drive it and own it). However, it should be possible to get feedback from many of the required parties during that review process, which would expedite the additional approvals needed.

Please email the proposal to extension-api-reviews@chromium.org for any additional feedback.

All APIs, public or private, will need sign off from a few different parties:

- API Review: The overall review of the API, including comments on exposed methods, events, or properties, and an overall review for whether the API fits in with the overall product vision. This sign off will come from a member of the Extensions or Apps team.
- Security Review: A review to ensure that the API will not pose any unnecessary security risk.
- Privacy Review: A review to ensure that there are no privacy concerns around leaking user data without permission.
- UI Review: A review of any UI implemented as part of your API. This review may not be necessary if there is no UI element to your API.

Modifications to an existing API should go through a similar process. Since modifications to these APIs are frequently far-reaching, please do not skip the proposal process! However, you may be able to expedite it. In particular, small changes (like adding a new property to a method) do not need a design doc or API Overview doc. Larger changes, like adding multiple new methods and events, should still include an API Overview (though it can be brief). Medium-sized changes, like adding a single new method, are up to the discretion of the API reviewers - we may ask for an API Overview, but it might not be necessary.

Do I need an API review for a private API? Yes! Private APIs are not as scrutinized as public APIs because we don't need to be as worried about API ergonomics, and we can be a little more lenient in security. However, we still need to review the API to make sure that:

Who signs off for the API review? The API review bit will be flipped by either an extensions or apps team member. If your API has been languishing, please ping rdevlin.cronin@ (Extensions APIs) or benwells@ (Apps APIs).

Note: All these templates default to public visibility.

- Extension API Modification
- Platform App API Modification
https://chromium.googlesource.com/chromium/src/+/24f8be7cf7077544607d7f22f69df1dfc0a189f4/extensions/docs/new_api_proposal.md
CC-MAIN-2020-40
refinedweb
1,213
63.19
I am using Dapper for a generic DAL that can be used for both Oracle and SQL Server. What would be the best way to provide paging and sorting methods so that it works for both SQL Server and Oracle without manually creating/changing the SQL statements? Something like:

var users = Dapper.Query<User>(sqlStatement.Skip(10).Take(10)); // where sqlStatement is a string

As @Alex pointed out, paging is done differently on the two databases of your choice, so your best bet for having the most optimized queries is to write separate queries for each. It would probably be best to create two data provider assemblies, each one serving one database: And then configure your application for one of the two. I've deliberately also created a Data.Provider namespace (which can be part of some Data assembly and defines all data provider interfaces within Data.Provider) that the two providers above implement.
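To make the accepted approach concrete, here is a hedged C# sketch. The interface and class names are invented for illustration, the OFFSET/FETCH syntax assumes SQL Server 2012+ and Oracle 12c+ (older Oracle needs the classic ROWNUM wrapper), and only connection.Query<User>(...) is actual Dapper API:

// Hypothetical provider-specific paging SQL behind a shared interface.
public interface IPagingSqlBuilder
{
    string Paged(string baseSql, string orderBy);
}

public class SqlServerPagingSqlBuilder : IPagingSqlBuilder
{
    public string Paged(string baseSql, string orderBy)
    {
        return baseSql + " ORDER BY " + orderBy +
               " OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY";
    }
}

public class OraclePagingSqlBuilder : IPagingSqlBuilder
{
    // Oracle 12c+ accepts OFFSET/FETCH; earlier versions need a ROWNUM sub-select instead.
    public string Paged(string baseSql, string orderBy)
    {
        return baseSql + " ORDER BY " + orderBy +
               " OFFSET :Offset ROWS FETCH NEXT :PageSize ROWS ONLY";
    }
}

// Usage stays the same Dapper call regardless of provider:
// var users = connection.Query<User>(
//     builder.Paged("SELECT * FROM Users", "Name"),
//     new { Offset = 10, PageSize = 10 });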
https://dapper-tutorial.net/knowledge-base/9983179/dapper-orm-and-paging-and-sorting-extension
CC-MAIN-2019-04
refinedweb
149
62.07
mount, unmount -- mount or dismount a file system

Standard C Library (libc, -lc)

#include <sys/param.h>
#include <sys/mount.h>

int mount(const char *type, const char *dir, int flags, void *data);
int unmount(const char *dir, int flags);

The mount() system call grafts a file system object onto the system file tree at the point dir. The argument data describes the file system object to be mounted. By default only the super-user may call the mount() system call. This restriction can be removed by setting the vfs.usermount sysctl(8) variable to a non-zero value.

The flags argument may include the following values:

MNT_NOSUID  Do not honor setuid or setgid bits on files when executing them. This flag is set automatically when the caller is not the super-user.
MNT_NOATIME  Disable update of file access times.
MNT_NODEV  Do not interpret special files on the file system.
MNT_SYNCHRONOUS  All I/O to the file system should be done synchronously.
MNT_ASYNC  All I/O to the file system should be done asynchronously.
MNT_FORCE  Force a read-write mount even if the file system appears to be unclean. Dangerous. Together with MNT_UPDATE and MNT_RDONLY, specify that the file system is to be forcibly downgraded to a read-only mount even if some files are open for writing.

The type argument names the file system. The types of file systems known to the system can be obtained with lsvfs(1). The data argument is a pointer to a structure that contains the type specific arguments to mount.

The unmount() system call disassociates the file system from the specified mount point dir. The flags argument may include MNT_FORCE to specify that the file system should be forcibly unmounted even if files are still active. Active special devices continue to work, but any further accesses to any other active files result in errors even if the file system is later remounted.

Upon successful completion, the value 0 is returned; otherwise the value -1 is returned and the global variable errno is set to indicate the error.

The mount() system call will fail when one of the following occurs:

[EPERM]  The caller is neither the super-user nor the owner of dir.
[ENAMETOOLONG]  A component of a pathname exceeded 255 characters, or the entire length of a path name exceeded 1023 characters.
[ELOOP]  Too many symbolic links were encountered in translating a pathname.
[ENOENT]  A component of dir does not exist.
[ENOTDIR]  A component of name is not a directory, or a path prefix of special is not a directory.
[EBUSY]  Another process currently holds a reference to dir.
[EFAULT]  The dir argument points outside the process's allocated address space.

The following errors can occur for a ufs file system mount:

[ENODEV]  A component of ufs_args fspec does not exist.
[ENOTBLK]  The fspec argument is not a block device.
[EFAULT]  The fspec argument points outside the process's allocated address space.

A ufs mount can also fail if the maximum number of file systems are currently mounted.

lsvfs(1), mount(8), umount(8)

Some of the error codes need translation to more obvious messages.

The mount() and unmount() functions appeared in Version 6 AT&T UNIX.

FreeBSD 5.2.1  September 8, 2003  FreeBSD 5.2.1
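The manual page itself carries no examples section; the following is a hedged sketch of how mount() and unmount() might be called for a ufs file system. The device name is made up, and the exact ufs_args layout and header location vary by release:

#include <sys/param.h>
#include <sys/mount.h>
#include <ufs/ufs/ufsmount.h>   /* struct ufs_args; header location may differ by release */
#include <stdio.h>
#include <string.h>

int
main(void)
{
        struct ufs_args args;

        memset(&args, 0, sizeof(args));
        args.fspec = "/dev/ad0s1e";     /* block special device (assumed name) */

        /* Mount read-only, then unmount again. Requires appropriate privilege. */
        if (mount("ufs", "/mnt", MNT_RDONLY, &args) == -1) {
                perror("mount");
                return 1;
        }
        if (unmount("/mnt", 0) == -1) {
                perror("unmount");
                return 1;
        }
        return 0;
}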
http://nixdoc.net/man-pages/FreeBSD/man2/mount.2.html
crawl-002
refinedweb
525
56.96
The need for BigDecimal By John O'Conner on Jul 25, 2007 by John Zukowski Working with floating point numbers can be fun. Typically, when working with amounts, you automatically think of using a double type, unless the value is a whole number, then an int type is typically sufficient. A float or long can also work out, depending upon the size of a value. When dealing with money, though, these types are absolutely the worst thing you can use as they don't necessarily give you the right value, only the value that can be stored in a binary number format. Here is a short example that shows the perils of using a double for calculating a total, taking into account a discount, and adding in sales tax. The Calc program starts with an amount of $100.05, then gives the user a 10% discount before adding back 5% sale tax. Your sales tax percentage may vary, but this example will use 5%. To see the results, the class uses the NumberFormat class to format the results for what should be displayed as currency. import java.text.NumberFormat; public class Calc { public static void main(String args[]) { double amount = 100.05; double discount = amount \* 0.10; double total = amount - discount; double tax = total \* 0.05; double taxedTotal = tax + total; NumberFormat money = NumberFormat.getCurrencyInstance(); System.out.println("Subtotal : "+ money.format(amount)); System.out.println("Discount : " + money.format(discount)); System.out.println("Total : " + money.format(total)); System.out.println("Tax : " + money.format(tax)); System.out.println("Tax+Total: " + money.format(taxedTotal)); } } Using a double type for all the internal calculations produces the following results: Subtotal : $100.05 Discount : $10.00 Total : $90.04 Tax : $4.50 Tax+Total: $94.55 The Total value in the middle is what you might expect, but that Tax+Total value at the end is off. That discount should be $10.01 to give you that $90.04 amount. Add in the proper sales tax and the final total goes up a penny. The tax office won't appreciate that. The problem is rounding error. Calculations build on those rounding errors. Here are the unformatted values: Subtotal : 100.05 Discount : 10.005 Total : 90.045 Tax : 4.50225 Tax+Total: 94.54725 Looking at the unformatted values, the first question you might ask is why does 90.045 round down to 90.04 and not up to 90.05 as you might expect? (or why does 10.005 round to 10.00?) This is controlled by what is called the RoundingMode, an enumeration introduced in Java SE 6 that you had no control over in prior releases. The acquired NumberFormat for currencies has a default rounding mode of HALF_EVEN. This means that when the remaining value is equidistant to the edges, to round towards the even side. According to the Java platform documentation for the enumeration, this will statistically minimize cumulative errors after multiple calculations. 
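As a quick illustration of HALF_EVEN tie-breaking (this snippet is mine, not part of the original article; it uses BigDecimal, introduced below, because a double cannot hold 90.045 exactly):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class HalfEvenDemo {
    public static void main(String[] args) {
        // 90.045 and 10.005 sit exactly between two cents; HALF_EVEN rounds to the even neighbor.
        System.out.println(new BigDecimal("90.045").setScale(2, RoundingMode.HALF_EVEN)); // 90.04
        System.out.println(new BigDecimal("10.005").setScale(2, RoundingMode.HALF_EVEN)); // 10.00
        // HALF_UP, by contrast, always rounds the tie away from zero.
        System.out.println(new BigDecimal("90.045").setScale(2, RoundingMode.HALF_UP));   // 90.05
        System.out.println(new BigDecimal("10.005").setScale(2, RoundingMode.HALF_UP));   // 10.01
    }
}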
The other available modes in the RoundingMode enumeration are:

- CEILING, which always rounds towards positive infinity
- DOWN, which always rounds towards zero
- FLOOR, which always rounds towards negative infinity
- UP, which always rounds away from zero
- HALF_DOWN, which always rounds towards the nearest neighbor, unless both neighbors are equidistant, in which case it rounds down
- HALF_UP, which always rounds towards the nearest neighbor, unless both neighbors are equidistant, in which case it rounds up
- UNNECESSARY, which asserts an exact result, with no rounding necessary

Before looking into how to correct the problem, let us look at a slightly modified result, starting with a value of 70 cents, and offering no discount.

Total : $0.70
Tax : $0.03
Tax+Total: $0.74

In the case of the 70 cent transaction, it isn't just a rounding problem. Looking at the values without formatting, here's the output:

Total : 0.7
Tax : 0.034999999999999996
Tax+Total: 0.735

For the sales tax, the value 0.035 just can't be stored as a double. It just isn't representable in binary form as a double. The BigDecimal class helps solve some problems with doing floating-point operations with float and double. The BigDecimal class stores floating-point numbers with practically unlimited precision. To manipulate the data, you call the add(value), subtract(value), multiply(value), or divide(value, scale, roundingMode) methods. To output BigDecimal values, set the scale and rounding mode with setScale(scale, roundingMode), or use either the toString() or toPlainString() methods. The toString() method may use scientific notation while toPlainString() never will.

Before converting the program to use BigDecimal, it is important to point out how to create one. There are 16 constructors for the class. Since you can't necessarily store the value of a BigDecimal in a primitive type like a double, it is best to create your BigDecimal objects from a String. To demonstrate this error, here's a simple example:

double dd = .35;
BigDecimal d = new BigDecimal(dd);
System.out.println(".35 = " + d);

The output is not what you might have expected:

.35 = 0.34999999999999997779553950749686919152736663818359375

Instead, what you should do is create the BigDecimal directly with the string ".35" as shown here:

BigDecimal d = new BigDecimal(".35");

resulting in the following output:

.35 = 0.35

After creating the value, you can explicitly set the scale of the number and its rounding mode with setScale(). Like other Number subclasses in the Java platform, BigDecimal is immutable, so if you call setScale(), you must "save" the return value:

d = d.setScale(2, RoundingMode.HALF_UP);

The modified program using BigDecimal is shown here. Each calculation requires working with another BigDecimal and setting its scale to ensure the math operations work for dollars and cents. If you want to deal with partial pennies, you can certainly go to three decimal places in the scale, but it isn't necessary.
import java.math.BigDecimal; import java.math.RoundingMode; public class Calc2 { public static void main(String args[]) { BigDecimal amount = new BigDecimal("100.05"); BigDecimal discountPercent = new BigDecimal("0.10"); BigDecimal discount = amount.multiply(discountPercent); discount = discount.setScale(2, RoundingMode.HALF_UP); BigDecimal total = amount.subtract(discount); total = total.setScale(2, RoundingMode.HALF_UP); BigDecimal taxPercent = new BigDecimal("0.05"); BigDecimal tax = total.multiply(taxPercent); tax = tax.setScale(2, RoundingMode.HALF_UP); BigDecimal taxedTotal = total.add(tax); taxedTotal = taxedTotal.setScale(2, RoundingMode.HALF_UP); System.out.println("Subtotal : " + amount); System.out.println("Discount : " + discount); System.out.println("Total : " + total); System.out.println("Tax : " + tax); System.out.println("Tax+Total: " + taxedTotal); } } Notice that NumberFormat isn't used here, though you can add it back if you'd like to show the currency symbol. Now, when you run the program, the calculations look a whole lot better: Subtotal : 100.05 Discount : 10.01 Total : 90.04 Tax : 4.50 Tax+Total: 94.54 BigDecimal offers more functionality than what these examples show. There is also a BigInteger class for when you need unlimited precision using whole numbers. The Java platform documentation for the two classes offers more details for the two classes, including more details on scales, the MathContext class, sorting, and equality. Posted by Daniel on July 27, 2007 at 02:09 AM PDT # Posted by Matt on July 30, 2007 at 02:49 AM PDT # I've worked with BigDecimal a lot, and these negatives are far outweighed by the positives of having incorrect results. Posted by Frank on July 30, 2007 at 10:44 PM PDT # The problem mentioned above, about the operands overloading will be solved in jdk 7. Posted by Victor Ramirez on September 02, 2007 at 12:48 PM PDT # Victor, are you sure about that? I thought this was still up for debate? Personally I don't believe operator overloading is necessary in Java. If you need it that bad just use C++. Posted by LSM on September 18, 2007 at 10:09 PM PDT #
https://blogs.oracle.com/CoreJavaTechTips/entry/the_need_for_bigdecimal
CC-MAIN-2015-32
refinedweb
1,293
51.55
I'm in the process of upgrading Webware to version 1.0.2 from version 0.8.4. We currently have a patch that we apply to the older version to add in socket close-on-exec within the ThreadedAppServer, like so:

--- Webware.orig/WebKit/ThreadedAppServer.py Thu Mar 21 11:40:40 2002
+++ Webware/WebKit/ThreadedAppServer.py Tue Oct 1 14:11:28 2002
@@ -26,6 +26,7 @@
 import select
 import socket
 import threading
+import fcntl
 import time
 import errno
 import traceback
@@ -105,6 +106,7 @@
 # @@ 2001-05-30 ce: another hard coded number: @@jsl- not everything needs to be configurable....
 self.mainsocket.listen(1024)
+ fcntl.fcntl(self.mainsocket.fileno(), fcntl.F_SETFD, 1)
 self.recordPID()
 print "Ready\n"

We are running FreeBSD 6.1. Do we need to reapply this patch to Webware 1.0.2, or was ThreadedAppServer rewritten to avoid this problem?

-Justin Akehurst

Justin Akehurst
Software Design Engineer - UI
Isilon Systems, Inc
P +1-206-315-7500 F +1-206-315-7501 D +1-206-315-7576
The Proven Leader in Scale-out NAS Simplicity and Value. Guaranteed
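As an aside on the patch above: the magic constant 1 passed to F_SETFD is the close-on-exec bit. A small illustrative snippet (not from this thread; the port number is arbitrary) spelling it with the named flag and preserving any other FD flags already set:

import fcntl
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("", 8086))
sock.listen(1024)

# Equivalent to the patch's fcntl(fd, F_SETFD, 1), but using the named flag.
flags = fcntl.fcntl(sock.fileno(), fcntl.F_GETFD)
fcntl.fcntl(sock.fileno(), fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)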
http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200906&viewday=26
CC-MAIN-2014-52
refinedweb
194
57.87
Started as a simple charting tool for monitoring a snow depth near the owner's country house in Norway, Highcharts quickly became one of the most popular visualization libraries. It provides a lot of great built-in interactive features and is easy to use. In this tutorial we are going to build a simple e-commerce dashboard with Cube.js and Highcharts. We’ll use the main Highcharts library, as well as Maps, Stock, and Solid Gauge modules. Please keep in mind that Highcharts libraries are available under different licenses, depending on whether it is intended for commercial/government use, or for personal or non-profit projects. Make sure to check its license page. Below, you can see the demo of the dashboard that we’re going to build. You can find a live demo here and the source code is available on Github. To implement this example, we’ll need: - Database (we will use PostgreSQL) with sample data - Cube.js backend to handle communications between our database and the frontend - The frontend application (we will build one with React) Analytics Backend We’re going to use a PostgreSQL database and an example e-commerce dataset. Use the following commands to download and import the example dataset. $ curl > ecom-dump.sql $ createdb ecom $ psql --dbname ecom -f ecom-dump.sql Next, let’s install the Cube.js CLI and create a new project. $ npm -g install cubejs-cli $ cubejs create highcharts -d postgres Cube.js uses environment variables inside the .env file for configuration. Update the contents of the .env file with your own database credentials. CUBEJS_DB_NAME=ecom CUBEJS_DB_TYPE=postgres CUBEJS_API_SECRET=SECRET Now, let’s start with the Cube.js backend. $ npm run dev At this step, you can find the Cube.js playground at. Here, you can see all the tables from our database and we can choose any of them to generate the schema. The Cube.js Data Schema concept is based on multidimensional analysis and should look familiar to those with experience in OLAP cubes. The two main entities are measures and dimensions: dimensions are ”as is” properties that we get from our database, but measures are results of aggregation operations like count, sum, average, and others. In this example, we need an orders and users table. Please, check it and click “Generate Schema.” Cube.js will then generate Orders.js and Users.js files inside the schema folder. Cube.js data schema is javascript code and can be easily edited. You can also dynamically generate schema, if needed. Let’s update the schema/Users.js file. We’ll keep only state, id dimensions, and count measure because we’ll need to use them in our example. cube(`Users`, { sql: `SELECT * FROM public.users`, dimensions: { state: { sql: `state`, type: `string` }, id: { sql: `id`, type: `number`, primaryKey: true } } }); That’s it for our backend. We’ve configured the database and created the Cube.js. backend. Now, we’re ready to start working on our frontend application. Frontend Dashboard with Highcharts Let’s generate our app with Cube.js templates. Navigate to the Dashboard App tab and select “Create custom application” with React and Ant Design. It will take some time to create a dashboard app and install dependencies. Once it is finished, you should see the dashboard-app folder inside your project’s folder. Next, let’s install the dependencies we’ll need. Run the following commands in the dashboard-app folder. 
$ cd dashboard-app
$ npm install --save highcharts highcharts-react-official @highcharts/map-collection

The command above installs the following dependencies: Feel free to remove all the files inside the src folder and the page folder, as well as update the dashboard/index.js file with the following content.

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import * as serviceWorker from './serviceWorker';

ReactDOM.render(
  <React.StrictMode>
    <App></App>
  </React.StrictMode>,
  document.getElementById('root')
);

serviceWorker.unregister();

Our application will have the following structure:
- App as the main app component
- Dashboard component that stores data and manages the app state
- Map, Line, Stock, and other chart components that manage chart render according to the application’s data and state.

Let’s create the <Dashboard /> component in the dashboard-app/src/components/Dashboard.js file with the following content (we’ll create the <Map /> component later):

import React from 'react';
import { Layout } from 'antd';
import { useCubeQuery } from '@cubejs-client/react';
import Map from './Map';

const Dashboard = () => {
  const { resultSet } = useCubeQuery({
    measures: ['Orders.count'],
    dimensions: ['Users.state'],
    timeDimensions: [
      {
        dimension: 'Orders.createdAt',
        dateRange: 'last year',
      },
    ],
  });

  if (!resultSet) {
    return "Loading…";
  }

  const data = resultSet.tablePivot().map(item => [item['Users.state'], parseInt(item['Orders.count'])]);

  return (
    <Layout>
      <Map data={data} />
    </Layout>
  );
};

export default Dashboard;

In the above snippet, we did several things. We imported the useCubeQuery React hook first.

import { useCubeQuery } from "@cubejs-client/react";

Next, to render the amount of orders in each state, we need to change the data into Highcharts’ format, where the first element is the state key and the second element is the value.

[
  ["us-ca",967],
  ["us-ny",283],
  ["us-wa",239],
  ["us-il",205],
  ["us-tx",190]
]

We’re using resultSet.tablePivot() to access data returned from the backend and to prepare it for rendering.

const data = resultSet.tablePivot().map(item => [item['Users.state'], parseInt(item['Orders.count'])]);

Now, we’re ready to pass our data to the Map chart. Let’s create a new dashboard-app/src/components/Map.js file with the following content.
import React, { useState, useEffect } from 'react'; import Highcharts from 'highcharts'; import HighchartsReact from 'highcharts-react-official'; import highchartsMap from 'highcharts/modules/map'; import mapDataIE from '@highcharts/map-collection/countries/us/us-all.geo.json'; highchartsMap(Highcharts); const staticOptions = { chart: { styledMode: true, }, credits: { enabled: false, }, title: { text: 'Orders by region<small>Highcharts Map API</small>', useHTML: true, }, colorAxis: { min: 0, }, tooltip: { headerFormat: '', pointFormat: ` <b>{point.name}</b>: {point.value}`, }, colorAxis: { minColor: '#FFEAE4', maxColor: '#FF6492', }, series: [ { name: 'Basemap', mapData: mapDataIE, borderColor: '#FFC3BA', borderWidth: 0.5, nullColor: '#FFEAE4', showInLegend: false, allowPointSelect: true, dataLabels: { enabled: true, format: '{point.name}', color: '#000', }, states: { select: { borderColor: '#B5ACFF', color: '#7A77FF', }, }, }, ], }; export default ({ data }) => { const [options, setOptions] = useState({}); useEffect(() => { setOptions({ ...staticOptions, series: [ { ...staticOptions.series[0], data: data, }, ], }); }, [data]); return ( <HighchartsReact highcharts={Highcharts} constructorType={'mapChart'} options={options} /> ); }; Inside the Map.js file, we imported useState, useEffect hooks, and a bunch of Highcharts components. Then, we defined chart options based on Highcharts Map API specs. In staticOptions, we can set map styling, source, data, event handlers, and other options. Highcharts has a wide selection of SVG maps to use. We’ve picked this one. Lastly, we merged our staticOptions and props.data and then passed it to the Highcharts component. That’s all for our <Map/> component. Now, we just need to update the ‘dashboard-app/App.js’ to include the <Dashboard /> component: + import Dashboard from './components/Dashboard'; - <Header /> - <Layout.Content>{children}</Layout.Content> + <Dashboard /> ...and we’re ready to check out our first chart! Navigate to in your browser, and you should be able to see the map chart that we’ve just built. A similar workflow can be used to create other chart types, as in the GIF below. - Define the static chart options, according to Highcharts API documentation. - Add data to options.series. - Pass options to the Highcharts component. The full source code of the above dashboard is available on Github, and you can check the live demo here. I hope you’ve found this tutorial helpful. If you have any questions or any kind of feedback, please let me know in this Slack channel.
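As a concrete instance of the three-step workflow above, here is a hedged sketch of a minimal line chart component; the title, series name, and data shape are illustrative and not from the original tutorial:

import React from 'react';
import Highcharts from 'highcharts';
import HighchartsReact from 'highcharts-react-official';

// 1. Define the static chart options, per the Highcharts API docs.
const staticOptions = {
  chart: { type: 'line' },
  title: { text: 'Orders over time' },
  credits: { enabled: false },
};

// 2. Merge the data into options.series, 3. pass the options to the Highcharts component.
export default ({ data }) => (
  <HighchartsReact
    highcharts={Highcharts}
    options={{
      ...staticOptions,
      series: [{ name: 'Orders', data }],
    }}
  />
);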
https://statsbot.co/blog/react-highcharts-example/
CC-MAIN-2020-34
refinedweb
1,258
51.24
Features
- 🎣 Easy to use, React Cool Form is a set of React hooks that helps you conquer all kinds of forms.
- 🗃 Manages complex form data without hassle.
- 🪄 Manages arrays and lists data like a master.
- 🚦 Supports built-in, form-level, and field-level validation.
- 🚀 Highly performant, minimizes the number of re-renders for you.
- 🧱 Seamless integration with existing HTML form inputs.

A tiny size (~ 7.2kB gzipped) library but powerful.

Quick Start

To use React Cool Form, you must use react@16.8.0 or greater which includes hooks. This package is distributed via npm.

$ yarn add react-cool-form
# or
$ npm install --save react-cool-form

Here's the basic concept of how it rocks:

import { useForm } from "react-cool-form";

const Field = ({ label, id, error, ...rest }) => (
  <div>
    <label htmlFor={id}>{label}</label>
    <input id={id} {...rest} />
    {error && <p>{error}</p>}
  </div>
);

const App = () => {
  const { form, mon } = useForm({
    // (Strongly advise) Provide the default values just like we use React state
    defaultValues: { username: "", email: "", password: "" },
    // The event only triggered when the form is valid
    onSubmit: (values) => console.log("onSubmit: ", values),
  });
  // We can enable the "errorWithTouched" option to filter the error of an un-blurred field
  // Which helps the user focus on typing without being annoyed by the error message
  const errors = mon("errors", { errorWithTouched: true }); // Default is "false"

  return (
    <form ref={form} noValidate>
      <Field
        label="Username"
        id="username"
        name="username"
        // Support built-in validation
        required
        error={errors.username}
      />
      <Field
        label="Email"
        id="email"
        name="email"
        type="email"
        required
        error={errors.email}
      />
      <Field
        label="Password"
        id="password"
        name="password"
        type="password"
        required
        minLength={6}
        error={errors.password}
      />
      <input type="submit" />
    </form>
  );
};

✨ Pretty easy right? React Cool Form is more powerful than you think. Let's explore it!
https://dev.to/wellyshen/introducing-react-cool-form-react-hooks-for-forms-state-and-validation-less-code-more-performant-2hhc
CC-MAIN-2021-39
refinedweb
288
50.12
Created on 2011-04-23.23:28:05 by swhite, last changed 2014-10-05.16:39:17 by zyasoft. If the column type in the ResultSet is reported as NULL then the DataHandler code doesn't bother trying to call any of the get methods, as the value must be NULL. Unfortunately it does then call wasNull(). This is invalid (see the Note in the Javadocs for java.sql.ResultSet), you can only use wasNull if you've previously called a get method. There are several easy fixes to this. One would be to call a get method, even though we know what the returned value will be. I've implemented an alternative, that bypasses the problematic wasNull check by returning from the middle of the switch statement. Another approach would be to put a guard around the 'wasNull' call below the switch statement, avoiding calling it if the value is already null or Py.None. This issue causes real problems when using the org.sqlite.JDBC driver (both the zentus and xerial implementations seems to cause problems). The patch includes a test that shows the problematic case, though I can only get it to fail with sqlite and not with other databases. I had to add sqlite to test.xml to run the test, and sqlite doesn't pass all the other tests. Sadly, although the patch fixes an obvious problem (the Type.NULL code is clearly wrong and should be fixed) it doesn't fix all the problems that occur when using this code with sqlite. It looks like the column type reported through the metadata from sqlite changes based on the stored value, not based on the type name or type affinity of the column. This means that that, although the patch fixes the real-world situation where sqlite returns a single row containing a NULL, it doesn't fix the case that there are multiple returned rows the first of which contains NULL. This causes sqlite to return Types.NULL for the first row, and not unreasonably DataHandler.java assumes this type applies to all the values in the column. A test along the following lines still fails under sqlite, even with the patch. def testNullReturnQuery(self): """testing that a resultset containing a NULL value doesn't break.""" c = self.cursor() try: c.execute("insert into zxtesting (id, name, state) values (100, NULL, 'xx')") self.db.commit() c.execute("select name from zxtesting where id = 100 or id = 1 order by id desc"); f = c.fetchall() assert len(f) == 2, "expecting two rows" data = f[0] assert data[0] is None, "expecting None/NULL returned, was %r" % (data,) data = f[1] assert data[0] is not None, "expecting non-NULL value returned, was %r" % (data,) finally: c.close() The only fix I can think of is to re-query the type for each value returned, i.e. rather than pass the type parameter into the getPyObject method, have it obtain it each time via something like: int type = stmt.getMetaData().getColumnType(col) This doesn't seem an ideal thing to have to do in generic JDBC driver code but it does make things work better for sqlite case. Revisit with zxJDBC to jyjdbc migration
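For illustration, a minimal sketch of the early-return fix described above — a hypothetical helper in the spirit of DataHandler.getPyObject, not the actual zxJDBC source:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

public class NullSafeFetch {
    // Only consult wasNull() after a getter has actually been called on the ResultSet.
    static Object getValue(ResultSet set, int col, int type) throws SQLException {
        Object obj;
        switch (type) {
            case Types.NULL:
                return null;            // nothing was fetched, so wasNull() must not be called
            default:
                obj = set.getObject(col);
        }
        return set.wasNull() ? null : obj;
    }
}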
http://bugs.jython.org/issue1741
CC-MAIN-2015-35
refinedweb
539
64.91
Why glass pane requires setLightWeightPopupEnabled(false)?
800351 Dec 10, 2007 11:44 AM

In the code below, the glass pane is a JPanel, which everybody knows is a Swing lightweight component. Then why does JComboBox require setLightWeightPopupEnabled(false) for its proper functioning? What on earth does the method do, in the first place? Why the glass pane? What oddity in the hell does the glass pane have that other Swing components don't?

import javax.swing.*;
import java.awt.*;

public class GlassPaneOddity{
    public static void main(String[] args){
        JFrame frame = new JFrame();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        JPanel gp = new JPanel(new BorderLayout());
        frame.setGlassPane(gp);
        JComboBox cb = new JComboBox(new String[]{"alpha", "beta", "gamma", "delta"});
        cb.setLightWeightPopupEnabled(false); // why is this line so critical?
        gp.add(cb, BorderLayout.NORTH);
        frame.setSize(300, 300);
        frame.setVisible(true);
        gp.setVisible(true);
    }
}

1. Re: Why glass pane requires setLightWeightPopupEnabled(false)?
800351 Dec 13, 2007 12:51 AM (in response to 800351)
Why does this question continue to show 'No replies'? Is it because my English is so bad? If it is, I will rewrite the question and repost it.
Re: Why glass pane requires setLightWeightPopupEnabled(false)?camickr Dec 13, 2007 5:21 PM (in response to 843806) What is the mystery?According to Michael's observation the combobox popup is painted "below" the glass pane but a regular popup is painted on top of the glass pane. Whether this behaviour makes sense or not it is documented in the Swing tutorial on "Using Root Panes". The first diagram shows the relationship between the layered pane and the glass pane. The layered pane section shows the default layers in a layered pane: 7. Re: Why glass pane requires setLightWeightPopupEnabled(false)?kirillcool Dec 13, 2007 6:30 PM (in response to camickr) 8. Re: Why glass pane requires setLightWeightPopupEnabled(false)?843806 Dec 13, 2007 6:34 PM (in response to camickr)This is a quite interesting subject: The JRootPane has basically 2 components (although its layout is managing a few more): - the glasspane - the layered pane The z-order of components inside containers (so also for JRootPane) is defined to be: - index 0 is highest - index 1 is lower but higher than 2 etc... The glasspane is added always on index 0 (meaning highest) The layered pane is added after (meaning lower) Inside the layered pane are added: - the content pane - the menubar And the layout of the rootpane (RootLayout) takes care of the positioning of the glasspane, layered pane, content pane and the menubar. The layered pane has a layer for content (containing contentpane and menubar) but also for popups. The popups are shown on top of the content pane, but always under the glasspane. As far as I can see this seems to be in line with the explanation under the rootpane tutorial. And in line with Michael's observation. Hope it helps, Marcel
https://community.oracle.com/thread/1366094
CC-MAIN-2020-10
refinedweb
856
66.33
After installing Snow Leopard, Pages will not open a saved document used in Leopard, due to the above error notice. Anyone else have this problem? (with hopefully a fix.) Thanks Intel IMac Core 2 Duo, Mac OS X (10.6) 1. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 3, 2009 6:34 AM in response to randyhatchAs I already asked in some other threads but it seems that every user want to have its own thread, this behavior resemble to a wrong install process. How have you installed the package? (1) using the installer (2) copying files from an old install or from a backup If you copied the files, I guess that you copied the iWork '09 folder of the Applications folder but not the iWork '09 folder containing shared items which must be available as: <startupVolume>:Library:Application Support:iWork '09: It is supposed to contain the folder "Frameworks" containing these folders: SFAnimation.framework SFArchiving.framework SFApplication.framework SFRendering.framework SFWebView.framework SFUtility.framework NumbersExtractorHeaders.framework SFLicense.framework SFInspectors.framework Inventor.framework SFDrawables.framework SFWordProcessing.framework SFTabular.framework SFCompatibility.framework SFCharts.framework SFProofReader.framework SFStyles.framework SFControls.framework MobileMe.framework Yvan KOENIG (VALLAURIS, France) jeudi 3 septembre 2009 15:34:19 2. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugirandyhatch Sep 3, 2009 6:58 AM in response to randyhatchThanks for the rebuff on starting a new thread, since I couldn't find a thread that addressed my specific problem. The problem only began after upgrading to Snow Leopard. Pages was working fine under Leopard. The error message points specifically to the SFWordProcessing plugin. All the files you list are in their appropriate places. Still not resolved. 3. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 3, 2009 7:30 AM in response to randyhatchYour problem under Snow Leopard: The same before Snow Leopard: Yvan KOENIG (VALLAURIS, France) jeudi 3 septembre 2009 16:30:10 4. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugirandyhatch Sep 3, 2009 8:03 AM in response to randyhatchYvan, Problem still not addressed adequately. It has been a long time since I have done a clean install of the OS to my hard drive. It looks as if it is time. 5. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 3, 2009 8:38 AM in response to randyhatchBefore doing a new system install, try to run Pages from an other user account. If it works, a system install is not required. If it doesn't, it is more complex. As I wrote in other threads (1) problems where reported with cache files (2) problems where reported with a few fonts (3) problems where reported with permissions Case (1) may require a re-install because at this time, tools able to rebuild caches are not updated for Snow. Cases (2) and (3) may be cured without a re-install. Of course I assume that you start checking if the problem is specific to Pages or if it strikes other apps like TextEdit for instance. Yvan KOENIG (VALLAURIS, France) jeudi 3 septembre 2009 17:38:03 6. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugirandyhatch Sep 3, 2009 9:45 AM in response to randyhatchYvan, Thanks for your responses. Pages and other programs are working fine. 
The problem seems to be with a large document - my personal journal - with imbedded photos. Other smaller documents, and starting a new document, work fine. Other user account doesn't solve the problem. 7. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 3, 2009 10:06 AM in response to randyhatchTry to send it to my mailbox. Click my blue name to get my address. Yvan KOENIG (VALLAURIS, France) jeudi 3 septembre 2009 19:06:14 8. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiMerged Content 1 Sep 7, 2009 9:33 PM in response to KOENIG YvanI am having precisely the same problem as RandyHatch, and I too find the other links provided by Yvan to be useless. In particular, I am using Pages 3.0.3 (part of iLife '08), which had been installed from the installation CD. After upgrading from 10.5.8 to 10.6, the my large 200-page file would no longer open in Pages. I have tried reinstalling iWork '08 to see if that would help fix the problem. No luck. I tried copying the file to my wife's computer, which has iLife '09 installed, and the document yielded the same error (albeit in a much shorter timespan---instantly rather than within 60s on 3.0.3). I would most appreciate a helpful response. This only seems to affect my 274kb text file, not my slightly smaller 250kb text file, leaving me to believe this is not an install issue or other user-caused issue like Yvan seems to imply. 9. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 8, 2009 3:28 AM in response to Merged Content 1It seems that you are not uptodate in reading my responses. I received Randy's file and for this unique file the problem is clearly an oddity linked to 10.6. If your document is readable, at this time, on a 10.5.8 machine and not on a 10.6 one, it's the 2nd example of the problem. It would be useful to get the two states of the document. May you attach them to a mail and send them to my mailbox. Click my bluename to get my address. Comparing the two documents, I would be able to insert some chunks of text allowing me to isolate what is the wrongdoer. Randy's doc is so huge that I can't do this kind of experiments on it. Switching it from one machine to the other one is too boring. With 274Kb it would be acceptable. Yvan KOENIG (VALLAURIS, France) mardi 8 septembre 2009 12:25:55 10. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiMerged Content 1 Sep 8, 2009 9:54 PM in response to KOENIG YvanWell, since this forum's admins apparently think an actual solution to a problem is less important than how the message is worded, I'll repost my solution to this problem... It turns out the problem is a font issue. Hoefler Text and large files apparently don't mix well, at least in this case. Perhaps the font became corrupted in the 10.6 installation? Changing the document to Times in the few seconds before the crash occurred seemed to fix the issue. Hopefully others having the same problem can use this tip to avoid stress, rather than be lectured in incoherent ramblings which only exacerbate aggravation. 11. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugirandyhatch Sep 9, 2009 5:17 AM in response to Merged Content 1Unfortunately, the document that I'm trying to use will not open in Snow Leopard at all. Therefore, I am unable to manipulate text or anything else. It is good to know that it is a font issue, but I can't solve my particular problem. 
I will probably need to return my OS to Leopard and wait for updates to Snow Leopard to fix the bug. 12. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugifruhulda Sep 9, 2009 7:48 AM in response to randyhatchHave you looked at your fonts in the Fontbook? In the File menu you can Validate fonts. If they are OK keep them, the other should be trashed. 13. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugirandyhatch Sep 9, 2009 9:10 AM in response to fruhuldaYes, to no avail. 14. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 9, 2009 10:14 AM in response to fruhulda fruhulda wrote: Have you looked at your fonts in the Fontbook? In the File menu you can Validate fonts. If they are OK keep them, the other should be trashed. The problem is linked to the official Hoefler font. At this time I am working on Randy's document. When it will be done, I will make systematic tests to determine the limit of document size pushing Pages to crash when the doc uses Hoefler on 10.6. During my work upon Randy's doc, I wrote a script allowing me to grab the name of embedded pictures with their native size and their displayed size. The grabbed infos may be used to resize the pictures out of Pages reducing drastically the document's size. In the given file, replacing huge jpegs by png files of the used size reduce seriously the doc's size with no loss on screen. Here is the script. -- -- [SCRIPT listOfPicturesWithSize] (* Save the script as Application (Application Bundle if you want to use it on a MacIntel) Store it on the Desktop. Drag and drop the icon of a Pages document on the script icon. You may also double click the script's icon. The script will build a list of embedded picture files. The list is available on the desktop in "listof_embeddedpictures.txt" Open it then copy its contents at end of the Pages document so that the page numbering will not be modified. If you want to put the list at the beginning, insert a page break to create a target blank page before running the script. 
Yvan KOENIG (Vallauris, FRANCE) 2009/09/09 *) property permitted4 : {"com.apple.iWork.pages.sffpages", "com.apple.iWork.pages.sfftemplate"} property permitted5 : {"com.apple.iwork.pages.pages", "com.apple.iwork.pages.template"} property permitted : permitted5 & permitted4 property nomDuRapport : "listof_embeddedpictures.txt" property typeID : missing value property wasFlat : missing value property listePages : {} property listeImages : {} property liste1 : {} --===== on run (* lignes exécutées si on double clique sur l'icône du script application • lines executed if one double click the application script's icon *) tell application "System Events" if my parleAnglais() then set myPrompt to "Choose an iWork’s document" else set myPrompt to "Choisir un document iWork" end if -- parleAnglais if 5 > (system attribute "sys2") then (* if Mac Os X 10.4.x *) set allowed to my permitted else (* it's Mac OS X 10.5.x with a bug with Choose File *) set allowed to {} end if -- 5 > (system… my commun(choose file with prompt myPrompt of type allowed without invisibles) (* un alias *) end tell end run --===== on open (sel) (* sel contient une liste d'alias des élémentsqu'on a déposés sur l'icône du script (la sélection) • sel contains a list of aliases of the items dropped on the script's icon (the selection) *) my commun(item 1 of sel) (* an alias *) end open --===== on commun(thePack) (* • thePack is an alias *) local pFold, pName, path2Index, texteXML my nettoie() set theTruePack to thePack as text tell application "System Events" to tell disk item theTruePack set typeID to (type identifier) as text set pFold to package folder set pName to name end tell if typeID is not in my permitted then if my parleAnglais() then error "“" & (docIwork as text) & "” is not a Pages document!" else error "«" & (docIwork as text) & "» n’est pas un document Pages !" end if -- my parleAnglais() end if -- typeID if pFold then set wasFlat to false else set wasFlat to true set thePack to my expandFlat(thePack) (* an alias *) log thePack end if try set path2Index to (thePack as text) & "Index.xml" set texteXML to my lis_Index(path2Index) (* here code to scan the text *) set line_feed to ASCII character (10) if texteXML contains line_feed then set texteXML to my remplace(texteXML, line_feed, return) if texteXML contains return then set texteXML to my remplace(texteXML, return & return, return) set texteXML to my supprime(texteXML, tab) set balise1 to "<sl:page-group sl:page=" & quote set balise2 to "<sf:media" set balise3 to "<sf:naturalSize sfa:w=" & quote set balise4 to "<sf:data sf:path=" & quote set balise5 to quote & "/>" set balise7 to "sf:size=" & quote if texteXML does not contain balise4 then if my parleAnglais() then error "The document “" & theTruePack & "” doesn’t embed picture !" number 8001 else error "Le document « " & theTruePack & " » ne contient pas d’image !" 
number 8001 end if else if texteXML contains balise1 then copy "nomImage" & tab & "page #" & tab & "naturalWidth" & tab & "naturalHeight" & tab & "trueWidth" & tab & "trueHeight" & tab & "fileSize (kBytes)" to end of my listeImages set my listePages to my decoupe(texteXML, balise1) repeat with i from 2 to count of my listePages set itmi to item i of my listePages set pageNum to item 1 of my decoupe(itmi, quote) set my liste1 to my decoupe(itmi, balise2) repeat with k from 2 to count of my liste1 set itmk to item k of my liste1 if itmk contains balise4 then -- on a une image set liste3 to my decoupe(itmk, balise3) set liste3 to my decoupe(item 2 of liste3, quote) set wNatural to item 1 of liste3 set hNatural to item 3 of liste3 set w to item 5 of liste3 set h to item 7 of liste3 set liste3 to my decoupe(itmk, balise4) set itmk to item 1 of my decoupe(item 2 of liste3, balise5) set nomImage to item 1 of my decoupe(itmk, quote) set fSize to item -1 of my decoupe(itmk, balise7) copy nomImage & tab & pageNum & tab & wNatural & tab & hNatural & tab & w & tab & h & tab & (fSize div 1024) to end of my listeImages end if set liste3 to {} end repeat end repeat end if -- balise4 set rapport to (my recolle(my listeImages, return)) as text set p2d to path to desktop set p2r to (p2d as Unicode text) & nomDuRapport tell application "System Events" if exists (file p2r) then delete (file p2r) make new file at end of p2d with properties {name:nomDuRapport} end tell -- "System Events" write rapport to (p2r as alias) if wasFlat then tell application "System Events" to delete file (thePack as text) -- delete temporary package tell application "Numbers" to open file p2r on error error_message number error_number my nettoie() if error_number is not -128 then my affiche(error_message) end try my nettoie() end commun --===== on nettoie() set typeID to missing value set wasFlat to missing value set my listePages to {} set my listeImages to {} set my liste1 to {} end nettoie --===== on affiche(msg) tell application "Finder" activate if my parleAnglais() then display dialog msg buttons {"Cancel"} default button 1 giving up after 120 else display dialog msg buttons {"Annuler"} default button 1 giving up after 120 end if -- parleAnglais() end tell -- application end affiche --===== on lis_Index(cheminXML0) local cheminXML0, cheminXMLgz, txtXML set cheminXMLgz to cheminXML0 & ".gz" tell application "System Events" if exists file cheminXMLgz then if exists file cheminXML0 then delete file cheminXML0 (* un curieux a pu dé-gzipper le fichier • someone may have gunzipped the file *) my expand(cheminXMLgz) set txtXML to my lisIndex_xml(cheminXML0) else if exists file cheminXML0 then set txtXML to my lisIndex_xml(cheminXML0) else if my parleAnglais() then error "Index.xml missing" else error "Il n'y a pas de fichier Index.xml" end if -- parleAnglais() end if -- exists file cheminXMLgz end tell -- to System Events return txtXML end lis_Index --===== on expand(f) do shell script "gunzip " & quoted form of (POSIX path of (f)) end expand --===== on lisIndex_xml(f) local t try set t to "" set t to (read file f) end try return t end lisIndex_xml --===== on expandFlat(f) (* f is an alias *) local zipExt, qf, d, nn, nz, fz, qfz tell application "Finder" to set newF to (duplicate f without replacing) as alias -- create a temporary item which will be changed as package set zipExt to ".zip" set qf to quoted form of POSIX path of newF tell application "System Events" tell disk item (newF as text) set d to path of container set nn to name 
set nz to nn & zipExt set fz to d & nz if exists disk item fz then set name of disk item fz to nn & my horoDateur(modification date of file fz) & zipExt set name to nz end tell -- disk item make new folder at end of folder d with properties {name:nn} end tell -- System Events set qfz to quoted form of POSIX path of fz do shell script "unzip " & qfz & " -d " & qf & " ; rm " & qfz return newF (* path to the temporary package *) end expandFlat --===== (* • Build a stamp from the modification date_time *) on horoDateur(dt) local annee, mois, jour, lHeure, lesSecondes, lesMinutes set annee to year of dt set mois to month of dt as number (* existe depuis 10.4 *) set jour to day of dt set lHeure to time of dt set lesSecondes to lHeure mod 60 set lHeure to round (lHeure div 60) set lesMinutes to lHeure mod 60 set lHeure to round (lHeure div 60) return "_" & annee & text -2 thru -1 of ("00" & mois) & text -2 thru -1 of ("00" & jour) & "-" & text -2 thru -1 of ("00" & lHeure) & text -2 thru -1 of ("00" & lesMinutes) & text -2 thru -1 of ("00" & lesSecondes) (* • Here, the stamp is "_YYYYMMDD-hhmmss" *) end horoDateur --===== on decoupe(t, d) local l set AppleScript's text item delimiters to d set l to text items of t set AppleScript's text item delimiters to "" return l end decoupe --===== on recolle(l, d) local t set AppleScript's text item delimiters to d set t to l as text set AppleScript's text item delimiters to "" return t end recolle --===== (* replaces every occurences of d1 by d2 in the text t *) on remplace(t, d1, d2) local l set AppleScript's text item delimiters to d1 set l to text items of t set AppleScript's text item delimiters to d2 set t to l as text set AppleScript's text item delimiters to "" return t end remplace --===== (* removes every occurences of d in text t *) on supprime(t, d) local l set AppleScript's text item delimiters to d set l to text items of t set AppleScript's text item delimiters to "" return (l as text) end supprime --===== on parleAnglais() local z try tell application "Pages" to set z to localized string "Cancel" on error set z to "Cancel" end try return (z is not "Annuler") end parleAnglais --===== -- [/SCRIPT] -- Yvan KOENIG (VALLAURIS, France) mercredi 9 septembre 2009 19:14:25
https://discussions.apple.com/thread/2141021?start=0&tstart=0
CC-MAIN-2014-23
refinedweb
3,061
51.52
Brian Leonard works as a senior software engineer with Sun Microsystems. He's been working with application servers before there was a J2EE standard, helping develop applications as well as the servers that run them. Until most recently, Brian's been focused on helping large enterprises implement and deploy highly-available architectures. In his current role, Brian is an evangelist for the NetBeans open source IDE. The). Currently, the application does not provide the means to delete a comment. Now, I know this application is severely lacking any sort of user model or administrative interface, but my intent is to show how to do cool things with Rails, not create a replacement for roller, so I squeeze the features in where I can. In this case, I'm going to add a trash can icon to the page and allow users to drag comments to the trash for deletion. Here we'll employ the draggable_element helper, a Rails wrapper around the Scriptaculous Draggable object, to make the comments draggable. <% Posted on <%= comment.created_at.strftime("%B %d, %Y at %I:%M %p") %> </div></li><%= draggable_element(comment_id, :revert=>true) %> <%= text_area 'comment', 'comment' %><%= image_tag "trash.jpg", :id=>'trash'%> Here we'll use the drop_receiving_element helper, a Rails wrapper around the Scriptaculous Droppables.add method to give the comment a drop destination and call the action to delete the comment. <%= drop_receiving_element('trash', # The id of the receiving element :accept => "comment", # The CSS class of the dropped element :with => "'comment=' + (element.id.split('_').last())", # The query string parameters :url => {:action=>:trash_comment} # The action to call)%> <li class="comment" id=<%= comment_id %> > def trash_comment comment_id = params[:comment] Comment.delete(comment_id) render :update do |page| page.replace_html "comment_#{comment_id}", "" end end <% Posted on <%= comment.created_at.strftime("%B %d, %Y at %I:%M %p") %> </div> </li></div><%= draggable_element(comment_id, :revert=>true) %> <script> function fill_trash() { $('trash').src = "/images/trashfull.jpg"; } function empty_trash() { $('trash').src = "/images/trash.jpg"; } </script> <%= drop_receiving_element('trash', # The id of the receiving element :accept => "comment", # The CSS class of the dropped element :with => "'comment=' + (element.id.split('_').last())", # The query string parameters :url => {:action=>:trash_comment}, # The action to call :onHover => "function() {fill_trash()}", :complete => "empty_trash()")%> <%= image_tag "trash.jpg", :id=>'trash', :onMouseOut=>"empty_trash()"%> RubyWeblogDND.zip vu[tab] vl[tab] class User < ActiveRecord::Base validates_uniqueness_of :username validates_length_of :username, :within => 5..10 end In addition to the server side validation that's being performed, we're going to add some client side validation using Ajax. We'll do this by displaying a message to the right of the field on the validity of what's been entered. Since we're validating against the database, we will need to make an XMLHTTPRequest back to the server. We can use Rails' observe_field helper method for this task. 
<%= f.text_field :username %><span id="message"></span>

<%= observe_field :user_username,      # The field to observe
      :with => "username",             # The input to validate
      :frequency => 0.25,              # The frequency in seconds to watch for changes
      :url => {:action => :validate }, # The action to call when changes occur
      :update => :message              # The DOM ID to update
%>

def validate#{message}</b>" render :partial=>'message' end

<%= javascript_include_tag 'prototype' %>

Firebug is your friend for troubleshooting Rails applications. With Firebug enabled, you can see the requests as you type:

Forgetting to include the Prototype libraries:

Misspelling the field to observe:

autovalidation.zip

Unlike with Rails, the NetBeans Groovy plugin does not come bundled with Groovy and the Grails framework - these must be pre-installed on your system (don't worry, it's easy):

Or as Grails prefers to call them, Domain classes.

class Post {
    String title
}

class BlogController {
    def scaffold = Post
}

Our blog needs a body field.

class Post {
    String title
    String body
}

Like Rails, validation in Grails is very straightforward.

class Post {
    String title
    String body

    static constraints = {
        title(blank:false)
        body(blank:false)
    }
}

Okay, enough with this dynamic scaffolding. Let's generate our controller and view code. Unfortunately, for this step I haven't found the menu option in NetBeans yet, so we'll resort to the command line for now.

<h1>The Groovy Blog</h1>
<g:each
  <h2>${post.title}</h2>
  <p>${post.body}</p>
  <small><g:link>permalink</g:link></small>
  <hr>
</g:each>

GrailsWebLog.zip
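The command-line step mentioned above for generating the Grails controller and views is presumably the standard generator command:

$ grails generate-all Post

generate-all writes both the controller and the GSP views for the named domain class; generate-controller and generate-views exist for doing each half separately.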
http://weblogs.java.net/blog/bleonard/
crawl-001
refinedweb
774
53.41
26 May 2012 00:11 [Source: ICIS news]

HOUSTON (ICIS)--The 40 cent/lb decrease represents a 27% reduction from $1.47/lb, the price at which the two producers settled their contracts in May.

The two nominations undercut a 24% drop proposed by another supplier, which on Monday offered a 35 cent/lb reduction from its $1.45/lb settlement in May, putting its June contracts at $1.10/lb.

A fourth US supplier, which settled at $1.55/lb in May, nominated an 18 cent/lb drop for June. The reduction would put its contracts at $1.37/lb next month, but one source predicted the supplier may eventually lower its contract prices to $1.15-1.20/lb.

($1 = €0.80)

For more on butadiene
http://www.icis.com/Articles/2012/05/26/9564304/US-BD-maker-matches-rival-with-proposed-40-centlb-June.html
CC-MAIN-2015-18
refinedweb
131
68.77
The time module This module provides a number of functions to deal with dates and the time within a day. It’s a thin layer on top of the C runtime library. A given date and time can either be represented as a floating point value (the number of seconds since a reference date, usually January 1st, 1970), or as a time tuple. Getting) The tuple returned by localtime and gmtime contains (year, month, day, hour, minute, second, day of week, day of year, daylight savings flag), where the year number is four digits, the day of week begins with 0 for Monday, and January 1st is day number 1. Converting time values to strings You can of course use standard string formatting operators to convert a time tuple to a string, but the time module also provides a number of standard conversion functions: # Converting strings to time values On some platforms, the time module contains a strptime function, which is pretty much the opposite of strftime. Given a string and a pattern, it returns the corresponding time tuple: # File: time-example-6.py import time # make sure we have a strptime function! try: strptime = time.strptime except AttributeError: from strptime import strptime print strptime("31 Nov 00", "%d %b %y") print strptime("1 Jan 70 1:30pm", "%d %b %y %I:%M%p") The time.strptime function is currently only made available by Python if it’s provided by the platform’s C libraries. For platforms that don’t have a standard implementation (this includes Windows), here’s a partial replacement: # File: strptime.py import re import string MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"] SPEC = { # map formatting code to a regular expression fragment "%a": "(?P<weekday>[a-z]+)", "%A": "(?P<weekday>[a-z]+)", "%b": "(?P<month>[a-z]+)", "%B": "(?P<month>[a-z]+)", "%C": "(?P<century>\d\d?)", "%d": "(?P<day>\d\d?)", "%D": "(?P<month>\d\d?)/(?P<day>\d\d?)/(?P<year>\d\d)", "%e": "(?P<day>\d\d?)", "%h": "(?P<month>[a-z]+)", "%H": "(?P<hour>\d\d?)", "%I": "(?P<hour12>\d\d?)", "%j": "(?P<yearday>\d\d?\d?)", "%m": "(?P<month>\d\d?)", "%M": "(?P<minute>\d\d?)", "%p": "(?P<ampm12>am|pm)", "%R": "(?P<hour>\d\d?):(?P<minute>\d\d?)", "%S": "(?P<second>\d\d?)", "%T": "(?P<hour>\d\d?):(?P<minute>\d\d?):(?P<second>\d\d?)", "%U": "(?P<week>\d\d)", "%w": "(?P<weekday>\d)", "%W": "(?P<weekday>\d\d)", "%y": "(?P<year>\d\d)", "%Y": "(?P<year>\d\d\d\d)", "%%": "%" } class TimeParser: def __init__(self, format): # convert strptime format string to regular expression format = string.join(re.split("(?:\s|%t|%n)+", format)) pattern = [] try: for spec in re.findall("%\w|%%|.", format): if spec[0] == "%": spec = SPEC[spec] pattern.append(spec) except KeyError: raise ValueError, "unknown specificer: %s" % spec self.pattern = re.compile("(?i)" + string.join(pattern, "")) def match(self, daytime): # match time string match = self.pattern.match(daytime) if not match: raise ValueError, "format mismatch" get = match.groupdict().get tm = [0] * 9 # extract date elements y = get("year") if y: y = int(y) if y < 68: y = 2000 + y elif y < 100: y = 1900 + y tm[0] = y m = get("month") if m: if m in MONTHS: m = MONTHS.index(m) + 1 tm[1] = int(m) d = get("day") if d: tm[2] = int(d) # extract time elements h = get("hour") if h: tm[3] = int(h) else: h = get("hour12") if h: h = int(h) if string.lower(get("ampm12", "")) == "pm": h = h + 12 tm[3] = h m = get("minute") if m: tm[4] = int(m) s = get("second") if s: tm[5] = int(s) # ignore weekday/yearday for now return tuple(tm) def strptime(string, format="%a %b %d 
%H:%M:%S %Y"): return TimeParser(format).match(string) if __name__ == "__main__": # try it out import time print strptime("2000-12-20 01:02:03", "%Y-%m-%d %H:%M:%S") print strptime(time.ctime(time.time())) (2000, 12, 20, 1, 2, 3, 0, 0, 0) (2000, 11, 15, 12, 30, 45, 0, 0, 0) Converting time values Converting a time tuple back to a time value is pretty easy, at least as long as we’re talking about local time. Just pass the time tuple to the mktime function: # File: time-example-3.py import time t0 = time.time() tm = time.localtime(t0) print tm print t0 print time.mktime(tm) (1999, 9, 9, 0, 11, 8, 3, 252, 1) 936828668.16 936828668.0 Unfortunately, there’s no function in the 1.5.2 standard library that converts UTC time tuples back to time values (neither in Python nor in the underlying C libraries). The following example provides a Python implementation of such a function, called timegm: # File: time-example-4.py import time def _d(y, m, d, days=(0,31,59,90,120,151,181,212,243,273,304,334,365)): # map a date to the number of days from a reference point return (((y - 1901)*1461)/4 + days[m-1] + d + ((m > 2 and not y % 4 and (y % 100 or not y % 400)) and 1)) def timegm(tm, epoch=_d(1970,1,1)): year, month, day, h, m, s = tm[:6] assert year >= 1970 assert 1 <= month <= 12 return (_d(year, month, day) - epoch)*86400 + h*3600 + m*60 + s t0 = time.time() tm = time.gmtime(t0) print tm print t0 print timegm(tm) (1999, 9, 8, 22, 12, 12, 2, 251, 0) 936828732.48 936828732 In 1.6 and later, a similar function is available in the calendar module, as calendar.timegm. Timing things The time module can be used to time the execution of a Python program. You can measure either “wall time” (real world time), or “process time” (the amount of CPU time the process has consumed, this far). # File: time-example-5.py import time def procedure(): time.sleep(2.5) # measure process time t0 = time.clock() procedure() print time.clock() - t0, "seconds process time" # measure wall time t0 = time.time() procedure() print time.time() - t0, "seconds wall time" 0.0 seconds process time 2.50903499126 seconds wall time Not all systems can measure the true process time. On such systems (including Windows), clock usually measures the wall time since the program was started. Also see the timing module, which measures the wall time between two events. The process time has limited precision. On many systems, it wraps around after just over 30 minutes.
http://www.effbot.org/librarybook/time.htm
CC-MAIN-2016-36
refinedweb
1,082
62.88
BOOT(9) BSD Kernel Manual BOOT(9) boot - halt or reboot the system #include <sys/reboot.h> void boot(int howto); The boot() function handles final system shutdown, and either halts or reboots the system. The exact action to be taken is determined by the flags passed in howto and by whether or not the system has finished auto- configuration. If the system has finished autoconfiguration, boot() does the following: 1. Sets the boothowto system variable from the howto argument. 2. If this is the first invocation of boot() doshutdownhooks(9). 6. Prints a message indicating that the system is about to be halted or rebooted. 7. If RB_HALT is set in howto, halts the system. Otherwise, re- boots the system. If the system has not finished autoconfiguration, boot() runs any shut- down hooks by calling doshutdownhooks(9), prints a message, and halts the system (unless RB_USERREQ is specified, in which case the system will be halted if RB_HALT is given, and rebooted otherwise; see reboot(2) for more details). reboot(2), doshutdownhooks(9), resettodr(9), vfs_shutdown(9) MirOS BSD #10-current November 13,.
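The manual page itself contains no example; purely as an illustration (not part of the original page), a kernel shutdown path could end up invoking the function roughly as follows, assuming the usual RB_* flags from <sys/reboot.h>:

#include <sys/reboot.h>

/*
 * Illustrative sketch only: halt the machine rather than reboot.
 * RB_HALT is documented above; boot() does not return.
 */
void
example_halt(void)
{
	boot(RB_HALT);
	/* NOTREACHED */
}

Passing a howto value without RB_HALT would request a reboot instead, as described above.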
https://www.mirbsd.org/htman/i386/man9/boot.htm
CC-MAIN-2014-10
refinedweb
184
55.74
Tiny Trick For ViewState Backed Properties This might be almost too obvious for many of you, but I thought I’d share it anyways. Back in the day, this was the typical code I would write for a value type property of an ASP.NET Control that was backed by the ViewState. public bool WillSucceed { get { if (ViewState["WillSucceed"] == null) return false; return (bool)ViewState["WillSucceed"]; } set { ViewState["WillSucceed"] = value; } } I have seen code that tried to avoid the null check in the getter by initializing the property in the constructor. But since the getters and setters for the ViewState are virtual, this violates the warning against calling virtual methods in the constructor. You also can’t initialize it in the OnInit method because the property might be set declaratively which happens before Init. With C# 2.0 out, I figured I could use the null coalescing operator to produce cleaner code. Here is what I naively tried. public bool WillSucceed { get { return (bool)ViewState["WillSucceed"] ?? false; } set { ViewState["WillSucceed"] = value; } } Well of course that won’t compile. It doesn’t make sense to apply the null coalescing operator on a value type that is not nullable. Now if I had stopped to think about it for a second, I would have realized how simple the fix would be, but I was in a hurry and quickly moved on and dropped the issue. What an eeediot! All I had to do was move the cast outside of the expression. public bool WillSucceed { get { return (bool)(ViewState["WillSucceed"] ?? false); } set { ViewState["WillSucceed"] = value; } } I am probably the last one to realize this improvement and everyone reading this is thinking, “well duh!”. But in case there is someone out there even slower than me, here you go! And if I spend this much time trying to write a property, you gotta wonder how I get anything done. ;) 19 responses
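If you end up writing many of these properties, one possible further refinement (not from the original post, and the helper name is made up) is to push the cast-plus-coalesce into a small generic method on your control base class, so each property shrinks to a one-liner:

// hypothetical helper inside a Control- or Page-derived class
protected T GetViewState<T>(string key, T defaultValue)
{
    object value = ViewState[key];
    return value == null ? defaultValue : (T)value;
}

public bool WillSucceed
{
    get { return GetViewState<bool>("WillSucceed", false); }
    set { ViewState["WillSucceed"] = value; }
}

Same trick, just centralized.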
https://haacked.com/archive/2006/08/07/tinytrickforviewstatebackedproperties.aspx/
CC-MAIN-2018-34
refinedweb
317
63.9
RobotC Advanced Concepts SSI Robotics September 7, 2013 Capitol College Topics • Variables Types • bool, byte, char, float, long, int, short, string, word, ubyte, void • Conditional Statements • do-while, for, if-else, switch, while • #ifdef #else #endif • Initialization • Header files • Functions • Accidental program abortion • Automatic brick power down • Autonomous Input • NXT Buttons • Tasks • Waiting • Motor Functions • bFloatDuringInactiveMotorPWM • bMotorReflected Variable Types • bool - 0=false, everything else is true • byte - [-128,127] - 8 bits • char - same as byte, but the value represents a character • float - 32 bits • long - [-2147483648,2147483647] - 32 bits • int - [-32768, 32767] - 16 bits • short - same as int • string - array of characters - null terminated • word - same as int • ubyte - [0,255] - 8 bits • void ASCII Table • 42 = ‘B’ Conditional Statements • if-else - determines which action is performed based on a test (NXT button example) • switch- determines which action is performed based on a value (autonomous activation example) • do-while - performs an action one or more times (NXT button example) • while - performs an action zero or more times based on a test (infinite manual loop example) • for - performs an action zero or more times based on an incremental value (accelerometer initialization example) Source Code Structure JoystickDriver.c Installs with Robot C File | New | New File - OR – Window Explorer | Right Click | New | Text Document accelerometer.h common.h Autonomous.c Manual.c File | New | Autonomous Template File | New | User Control Template Autonomous vs. Manual • Initialization • Functions • Algorithm Selection • Tasks • Wait For Start • Infinite Loop • Process Joystick & Sensor Input • Drive Motors • Etc. • Initialization • Functions • Algorithm Selection • Tasks • Wait For Start • Infinite Loop • Process Sensor Input • Drive Motors • Etc. Initialization • Brick • nNxtExitClicks • nPowerDownDelayMinutes • disableDiagnosticsDisplay (i.e. bDisplayDiagnostics) • Sensors • Reset motor encoders • Motors • bFloatDuringInactiveMotorPWM • bMotorReflected (discussion only, no example) • Servos • servo and servoTarget are equivalent functions. initializeRobot • InitializeRobot is a function provided in the RobotC template (File | New | Autonomous Template or File | New | User Control Template). • It is an example of an in-file function. • It has no return value; it accepts no parameters. • It is called by task main. • initializeRobot has been renamed ‘initialize_autonomous’ in the autonomous template and ‘initialize_manual’ in the manual template for the sake of this presentation. • By default, it does nothing. Useful, huh? Well after you add some code to it, it will be. bDisplayDiagnostics • This resides in JoystickDriver.h. • It is set to true by default, so diagnostics (external battery level, NXT battery level, FMS message count, and file name) will be displayed on the NXT screen. • Set it to false or call disableDiagnosticsif you want to exclusively use the screen. • For this presentation, it has been set to false in the initialization function. nNxtExitClicks • nNxtExitClicks is an intrinsic word defined in RobotCIntrinsics.c. If you want more information on intrinsic words, see • It is used to prevent accidental program abortion. • Syntax: nNxtExitClicks = 3; • The dark grey button will need to be pushed 3 times in rapid succession to exit the currently running application. 
• Replace ‘3’ with some other number [-32768, 32767] to change its behavior. Note, if you choose a very large number (-1 for example), then you might as well pull the battery, stop the program with Robot C, etc… • This statement can be placed in initialize_autonomous and initialize_manual, but there is a better way. nPowerDownDelayMinutes • Set this to zero to prevent the brick from sleeping during the match. This tells the brick to never fall asleep. • The inspection process includes a step that sets the brick’s sleep timer to never. • If bricks are swapped, please visit inspection again, but also make sure that the sleep timer is set to never before a match. • Similar NXT Battery and Power Controls • These aren’t used, but might be fun reading • alive, powerOff, bNoPowerDownOnACAdapter, bNxtRechargable, LowVoltageBatteryCountLimits, nAvgBatteryLevel, nImmediateBatteryLevel, externalBattery bFloatDuringInactiveMotorPWM • When set to true, the motors will float or coast when power is not applied. • When set to false, the motors will brake when power is not applied. • This is useful when you have spinning motors at the end of the match. When set to true, the motors will slow to a stop, which prevent gear boxes from cracking. • Careful, it is applied to ALL motors! bMotorReflected • This intrinsic word can be used instead of using the Motors and Sensor Setup menu. • Syntax: bMotorReflected[name_of_motor] = true • Set it to true and the applied power level will be automatically negated. • Set it to false and the direction indicated by the sign of the power (positive drives the motor in one direction; negative, the other) is not modified. Functions • Functions can reside in the same file as task main or reside in some other file, usually with a .h extension (‘h’ stands for header). Though RobotC often includes ‘.c’ files into other ‘.c’ files. • Suppose you want the exit click functionality to apply to both the autonomous main and manual main tasks. • Perform the following • Create a file called common_initialization.h. • Add the common_initialization function. • Move the ‘nNxtExitClicks’ statement from initialize_autonomous and initialize_manual to the new common_initialization function. • Include the common_initialization header file into the autonomous and manual ‘.c’ files. • Call common_initialization from task main (both autonomous and manual). • Note that the return statement is optional. • Now, that’s slick. External header files can also be used to manage multiple autonomous routines. Algorithm Selection • Suppose you have multiple autonomous routines, but you don’t want to use RobotC or the program chooser at the beginning of every match. • Instead of writing the autonomous routines as separate ‘task main’s, write them using functions and call them from a single task main (in the autonomous template). • The functions can all be in the same file as task main, all in a separate file (ex. auto_routines.h), or each in its own file (ex. auto_1.h, auto_2.h, etc.). If they are in separate files, remember to include the function’s file(s) into the autonomous .c file. • The buttons on the brick can be used to select which autonomous function that ‘task main’ calls. NXT Buttons • nNxtButtonPressed indicates which button, if any, is being pressed. 
• A value of 3 indicates that the large orange button is being pressed; 2 is the light grey button to the left of the orange button; 3, the light grey button to the right of the orange button; 0, the dark grey button below the orange button; -1, no button is being pressed. • Note that the button code is in a do-while loop. That loop executes so quickly, that multiple button presses would register. The toggle variable prevents this. Sensors (An Accelerometer Example) • Suppose you want to maintain a list of the last 16 readings from an accelerometer. This can be incorporated into the main task, but… • It will be harder to maintain this code as the size of your program grows larger. • The same code might be needed in autonomous and manual operation. • The initialize_the_accelerometer function resides in accelerometer.h. • It requires a driver supplied in the RobotCexample ‘drivers’ folder called hitechnic-accelerometer.h. You can copy and paste that file into your source directory or modify the include statement to indicate the location of the driver’s header file. • #include “Sample Programs/NXT/3rd Party Sensor Drivers/drivers/hitechnic-accelerometer.h” Sensor Pragmas • The motor and sensors dialog can be used to configure the accelerometer to a sensor port. • Do NOT use HiTechnic Sensors | HiTechnicAccel. • The pragmas at the top of autonomous.c (and manual.c) need to have a line for the sensor. • Note the third parameter “Accelerometer”. It is the name assigned to the sensor on sensor port 1. • The initialize_the_accelerometer and update_sensors functions require the third parameter. • It is case sensitive. • It is important that changes made to the pragmas in one template are applied appropriately to the other template. • An exception are pragmas that aren’t used by the other template, but in GENERAL, the auto and manual should be the same. Tasks • Tasking allows code to be placed into a “function” that runs “concurrently” with the main task. • Ten tasks at most can be executed concurrently within an NXT user program. • The StartTask intrinsic function takes one parameter, the name of the task. • The t_update_sensors task calculates the average accelerometer values over the past 16 iterations. • It can be started before the waitForStart call. • Tasks can be started there as long as robot movement doesn’t result. Robots must stay within the 18 inch cube until started and they must not move parts until AFTER waitForStart is called. waitForStart • Both the autonomous and manual templates call waitForStart. This macro executes until the FCS sends the begin-the-match signal. • Both templates then use an infinite loop to execute code to make the robot do something. • What can be done in that loop? Let’s see! The Infinite Loop • This is where input is processed. Input can come from sensors on the robot or commands from the joystick controllers. • This is where you will place autonomous and manual code. If you have written your autonomous routine as a task, this loop is still required, so the task will not be killed when task main returns. • For example, the autonomous functions display the values of the global variables maintained by t_update_accelerometer. • To allow other tasks a chance to execute, a wait can be inserted that causes the main task to “sleep”. • “wait10Msec(X);” - replace ‘X’ with the amount of sleep time in the statement. • abortTimeslice (); // Allows other waiting threads to execute. • Let’s look at some manual code. 
Processing Joystick Controller Joysticks • To obtain joystick controller input, use the intrinsic function getJoystickSettings. • There are four joysticks available at a competition: two on controller 1 (x and y) and two on controller 2 (x and y). • The y-axis is away and toward the human operator and has the range [+127,-128]. Negative is toward the human, positive is away from the human. • A joystick very RARELY equals zero when it is being neither pushed nor pulled. • Using the highest 99 positive values and lowest 99 negative values of the joystick to drive a DC motor provides a dead-zone (i.e. a power of zero is applied to the motor). • [109,127] drives the motor in the “positive” direction at a power level of [1,100]. • [-109,-127] drives the motor in the “negative” direction at a power level of [1,100]. • This is a very simple algorithm for mapping joystick positions to motor power. There are others to drive the robot at a slow speed and high speed, exponential speeds, etc. • The x-axis is left and rightand has the range [-128, 127]. Negative is left, positive is right. Processing Joystick Controller Buttons • To obtain joystick controller button values, use the intrinsic function getJoystickSettings (only a single call is necessary to obtain the joystick and button values). • The joy1Btn and joy2Btn functions return a boolean, which indicates whether a button is being pressed. The controller buttons are numbered 1-10. Supply these as parameters. • The presentation code uses joy1Btn (5) != false). If the button is being pressed then the first block of code will be executed. If false, then the second block. Many compilers (i.e. processors) can test “!=“ faster than “==“. Keep in mind that the conditional will need to test for the opposite condition. Processing Sensor Data • Many times, sensors can be processed directly in the infinite loop. • Run a motor until a touch sensor has been triggered. • Turn a light on when the magnetic sensor detects a magnet. • Sometimes, it is better to place the code in a task as in the accelerometer example. References RobotC.net • Webpages • • Forums • SSI Robotics • Webpages • • E-Mail • ftcquestions@ssirobotics.the-spanglers.net
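The joystick-to-motor mapping above is described only in words; as a rough illustration (not taken from the presentation — the motor name, dead-zone size, and scaling are assumptions, and the motor would normally come from your Motors and Sensors Setup pragmas), a minimal manual-mode loop might look like this:

#include "JoystickDriver.c"

task main()
{
  waitForStart();                      // wait for the FCS start signal
  while (true)
  {
    getJoystickSettings(joystick);     // refresh joystick values

    int y = joystick.joy1_y1;          // controller 1, y-axis: [-128, 127]
    if (abs(y) < 10)
      motor[motorA] = 0;               // dead zone: ignore stick drift
    else
      motor[motorA] = (y * 100) / 127; // scale to roughly [-100, 100]

    wait10Msec(2);                     // give other tasks a chance to run
  }
}

motorA here is just the NXT's own port A; a TETRIX motor name configured through the Motors and Sensors Setup dialog would go in its place on a competition robot.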
https://www.slideserve.com/minowa/robotc-advanced-concepts
CC-MAIN-2022-33
refinedweb
1,942
56.25
Playing “devil’s advocate” is very much in my nature, so in the absence of external input, I’ll have debates with myself. I’ve been in a constant debate lately as to the role of source control in the development process. Necessary evil? Useful tool? Glorified backup mechanism? Thin layer over diff/patch and friends? Projects like Arch (Sorry, GNU Arch these days) go a long way toward proving that “thin layer on top of other existing tools” can be an accurate description (well, and CVS being a layer on RCS and many other such systems). Does that mean anything though? Is source control just whatever can keep versions of files? I’ve had times in the past (especially using CVS, which I still have a love/hate relationship with) where trying to use my source control’s branch/merge support was infuriating – I fell back to tools I knew (patch/diff/etc) that worked as expected with simpler command-lines (mainly because I had more work to do, and no time to play around until I got the desired results – if you don’t do an operation much, you forget the command-line structure for it easily). So source control is a conceptual layer on existing tools, the first layer that adds file revision as a concept. Maybe it also adds things like the concept of pended changes, maybe it adds atomic changesets, maybe it adds versioned renames (yeah, yeah, we have all those), whatever, but you have a tree (or set of trees) with versioned contents. The mechanics of these concepts aren’t really interesting for the typical user. I’m slowly starting to realize that it’s the usability of the features that really make or break a system. I think I learn this lesson slowly because I’ve spent so much (too much) time evaluating software systems like many people do cars or other types of purchases – bullet points of capabilities. Yes, based on all the listed bells and whistles. Far too little focus on actual usability of said bells and whistles. Our development of Hatteras both is and has been very scenario-based, thankfully, so we’re always trying to make sure the end user usage is as nice as possible. work. More to the point, though: What do you think? How do you regard source control? How about particular source control systems? Which do you hate? Which do you love? Which features of whatever source control systems make your development lives better or worse? Most other source control systems have had a few to dozens of releases to get lots of great customer feedback. We do a lot to get some feedback so we should hopefully get a 1.0 that’s really close, but it still keeps us up at night. J I think StarTeam is the best configuration management tool, bar none. Granted, CM is more than just version control, but I always found StarTeam to be great at basic version control., as well. ESPECIALLY w.r.t. branching. You’ve just got to get your mind around the ‘View’ metaphore which drvies StarTeam — no easy feat for anyone with an RCS (or RCS-like) upbringing. On the other hand, I find CVS to be painfully unintuitive when it comes to branching and tagging. I’ve screwed up CVS repositories with relative ease; never happened with StarTeam. I always thought it funny that people would say an advantage of CVS is that it uses text files, not some fancy, proprietary repository. "You can just edit the text files to fix things!" they proclaim. Well, I never NEEDED to fix things in StarTeam (or even ClearCase), but I have had to fix CVS lots of times. 
CVS NEEDS to be in plain text files, because it can be so screwy that you will HAVE to fix it at some point (IMHO). Now, all of that being said, I personally use CVS because I work for a small ISV and we cannot afford a full-blown StarTeam deployment right now (server + clients + SQL Server repository, backup agents, etc. It adds up quickly). We use CVSNT + WinCVS + CVS Proxy from PushOk software, for VS.NET integration THAT ACTUALLY WORKS. But I know we’ll have to abandon CVS soon. It’s a major pain-in-the-buut that CVS cannot handle something as ‘simple’ as a MOVE or RENAME command (unlike, say, StarTeam, which is a dream to use for these types of operations). We just live with it. [I’m sure someone will again say ‘but you can hand-edit those plain text files and do what you want!’ yeah, right…Google ‘cvs move’ and you’ll see a lot of ‘yeah, it can be done BUT…’ and then lots of warnings about screwing everything up.] But, given how refactoring will be so easy and routine in VS.NET 2005, CVS will no longer cut it — daily renaming of files, moving to new folders, etc. must be handled with ease by the underlying VCS. Giving the tight integration of Hatteras and the rest of VSTS, I assume it will have no problems whatsoever with these operations. [Right, MSFT? With full history and roll-back???] Call me crazy, but I had hoped for years and years that MSFT would buy StarBase (makers of StarTeam before Borland snatched it up). I just wanted to see VSS die. (And I can’t believe that’s still not happening…!) Brant – I think you’re absolutely correct. Education is very much critical, and I can concur that my own undergrad and even grad education didn’t include anything about version control – it was always just assumed that it had been explained earlier. In the CVS case, while there are a couple of decent places on "how to use cvs", the sourceforge doc () at least includes a little more theory so I tend to point people there if they’re asking me for CVS primers. I’m sure Hatteras and all of VSTS will have excellent documents both helping explain SCM concepts and specific mechanics. Between Korby (), Rob (), myself, and many others, we’re all working to try and bridge the gap between both those with previous source control experience and those without. Ed – I’m glad to hear you like StarBase so much – I’ve never had the pleasure of using it, but I’ll try to learn as much as I can about it to make sure we make TFS as user-friendly as you find StarBase. WRT the couple of questions/comments: – we branch in namespace, so you "branch source target" and "merge source target" later on (merging just particular changes as desired, doing more merges later, etc.) – I hate(d) how CVS doesn’t do branches as new entries in the namespace, so I’m happy that with TFS if I have a main/ subdir, I can "branch main rel-1.0" and create a branch for my 1.0 release as desired and merge change(set)s in either direction with ease. – renames are supported in TFS as fully versioned operations that are full first-class pended changes, just like a pended add or pended delete (or pended undelete!). They’re shown in history and you can treat them like any other pended change – they increment the item’s version number like any other change (like an edit would). Yes, history is preserved – it’s really the same item before and after rename, not a "clone" like some systems do with a branch/delete operation pair. 
I don’t know whether this is on-topic or not, partly because I’m not sure exactly what the relationship is between VSTS and SourceSafe 2005. I’d hope that from a reduction-of-duplication standpoint that VSTS uses SourceSafe as its versioning layer, but I don’t want to assume so. I can’t make any specific comments about VSTS itself because I haven’t seen it yet, but I can comment on things that have driven me crazy about SourceSafe for some time. Mostly these are to do with the integration between SourceSafe and Visual Studio. The first is that in 2002 and 2003, basic file management in VStudio breaks when the project is in source control. Want to rename a file? Sorry, don’t use the rename menu item, I can’t handle that (even though SourceSafe itself is fully capable of it), you have to do ‘exclude from project’, ‘show hidden files’, ‘rename’, ‘include in project’, ‘hide hidden files’ or some equivalently silly procedure. (As an aside, having a dialog box that basically says "If you do this, everything will break – do it anyway?" is silly, because many users are trained to just press OK on dialog boxes without reading them.) Want to delete a file? Better fire up the SourceSafe standalone client if you don’t want your SourceSafe repository cluttered up with obsolete crap. The second is that on occasion I’ve seen people be able to drag-drop or cut-paste files into source controlled projects (in VStudio) *without* checking the files out, or any warnings popping up. This should NEVER be possible, obviously. In an environment where Source Code Control is being used, it should *never* be possible to get into a state where something in your local copy disagrees with source control and will be overwritten on the next ‘Get Latest’. The third is the silly magic string that has to be in the project file in order to enable source code control. You know the one I’m talking about – the CZVDAAAA (or whatever) string that identifies the location of the file in SourceSafe. We’ve scripted the creation of our solution files, and we frequently *branch* in SourceSafe, and both of these operations cause problems for VStudio. Basically, it turns out to be impossible to create a valid solution/project combination without this magic string (you’ll get a warning every time you open the project) but there’s no way to get the magic string through scripting. We just tell the first person working on the project to manually "Change source control", which forces VStudio to pick up the magic string and put it in the project file, so our script can pick it up there in future. The fix would either be to stop requiring/using the magic string in the project and solution files, or provide some programmatic way to find that string for a particular project path. Or (better yet) both. Finally, it would save me and my company a HECK of a lot of time if SourceSafe were capable of doing intelligent merging of branches. Okay, for starters, allow merging on a per-project level rather than per-file, but I’ve gotten away with scripting that bit. The important bit is remembering which branches have already been merged, and not giving conflicts for things that were already resolved on the last merge. It needs to be possible to have a baseline branch and (say) a customer development branch, and periodically move everything from the baseline to the customer’s branch, and have SourceSafe know which stuff has already been merged, not have to merge everything back to the beginning of time every time. 
Or repeatedly merge changes from a release branch back to the trunk. Hopefully all this stuff is already fixed. But you did ask for feedback 😉 Stuart. Wow, Stuart, thanks for the feedback! Because answering these questions is going to be lengthy, and others may find it helpful as well, I’m going to make it another post in my blog (should be posted in about half an hour or so). Short answers (post will be long answers): – not based on SourceSafe at all item 1) we do the right thing (pend rename or delete) item 2) I may be misunderstanding the situation you’re describing, but source controlled files not checked out are read-only. item 3) no magic strings for Hatteras-controlled projects. If you branch/merge on a basis "above" your solution directories, you shouldn’t really need to muck with the project files or solutions at all (most everything is local-relative-path) item 4) Hatteras merging is far more intelligent – already-merged changes won’t be merged again.
https://blogs.msdn.microsoft.com/jmanning/2004/07/11/random-weekend-source-control-babbling/
CC-MAIN-2016-30
refinedweb
2,061
68.1
Getting started with regression and decision trees Regression analysis is one of the approaches in the Machine Learning toolbox. It is widely used in many fields but its application to real-world problems requires intuition for posing the right questions and a substantial amount of “black art” that can't be found in textbooks. While practice and experience are required to develop these intuitions, there are practical steps that you can take to get started. This article provides you with a starting point by showing you how to build a regression model from raw data and assess the quality of its predictions. The article will use code examples in Python. Before running the code make sure you have the following libraries installed: pandas, matplotlib and sklearn. Imagine you are working for a bike sharing company that operates in the area of a specific city. This company has a bike sharing scheme where users are able to rent a bike from a particular location and return it at a different location using the machines at the parking spots. You are in charge of predicting how many bikes are going to be used in the future. This is obviously very important for predicting the revenues of the company and planning infrastructure improvements. Unfortunately, you don’t know much about bike sharing. All you have been given is a csv file where you have the number of bikes hired every day. You can import the file with Python using pandas and check the first five lines: import pandas as pd bikes = pd.read_csv('bikes.csv') bikes.head() Here you notice that not only you have the number of bikes hired for each day, but also the average weather condition in that day. At this point it's worth checking if there's a relation between the weather and the bikes used. Plot the temperature against the bike count to find out: from matplotlib import pyplot as plt plt.figure(figsize=(8,6)) plt.plot(bikes['temperature'], bikes['count'], 'o') plt.xlabel('temperature') plt.ylabel('bikes') plt.show() Scatter plot Bingo! Here you see that there is a clear relation between the number of bikes hired and temperature. You have observed that as the temperature increases, the number of bikes hired generally increases too. However, how can you use this information to predict the number of bikes hired? This is where regression analysis comes in our help. With regression analysis you can capture the relationship between temperature and number bikes hired in a model that you can query any time you need to estimate the number of bikes from the temperature. There are many regression techniques that you can apply; the one that you will use here is called Decision Trees. Why? - It is a rule based technique. The prediction is done by applying a cascade of rules of the type “is the temperature less or equal than x degrees?”. This makes the model easy to interpret. - It doesn’t require any data transformation. It means that we don’t have to spend more time preprocessing the data. - It can handle complex relationships (not only simple linear relationships). sklearn provides an implementation of the Decision Trees which is very straightforward to use: from sklearn.tree import DecisionTreeRegressor import numpy as np regressor = DecisionTreeRegressor(max_depth=2) regressor.fit(np.array([bikes['temperature']]).T, bikes['count']) Here we instantiated the regressor and, calling the method fit, we optimised (in technical terms trained) it for our data. 
Now you can answer questions such as “how many bikes will be hired when the temperature is 5 degrees?” regressor.predict(5.) array([ 189.23183761]) Or, “how many bikes will be hired with a temperature of 20 degrees?” regressor.predict(20.) array([ 769.08756039]) You can visualise the prediction when temperature varies as follows: xx = np.array([np.linspace(-5, 40, 100)]).T plt.figure(figsize=(8,6)) plt.plot(bikes['temperature'], bikes['count'], 'o', label='observation') plt.plot(xx, regressor.predict(xx), linewidth=4, alpha=.7, label='prediction') plt.xlabel('temperature') plt.ylabel('bikes') plt.legend() plt.show() Decision Tree Regression Here you can note that the prediction increases when the temperature increases. You can also note that the prediction is a stepwise function of the temperature. This is strictly related to how Decision Trees work. A decision tree splits the input features (only temperature in this case) in several regions and assigns a prediction value to each region. The selection of the regions and the predicted value within a region are chosen in order to produce the prediction which best fits the data. Where for best fit we mean that it minimises the distance of the observations from the prediction. You can inspect the set of rules created during the training process by exporting the tree: from sklearn.tree import export_graphviz export_graphviz(regressor, out_file='tree.dot', feature_names=['temperature']) Here we exported the tree in dot format. Visualising it with Graphviz () we have the following result: Decision Tree The rules are organised in a binary tree: each time you ask to estimate the number of bikes hired the temperature is checked with the rules starting from the root of tree till the bottom following the path dictated form the outcome of the rules. For example, if the input temperature is 16.5, the first checked (temperature <= 14.3) will give a negative outcome leading us to its right child node in the next level of the tree (temperature <= 25.8), this time you will have a positive response and you will end in its left child. This node is a leaf, which means that it doesn't contain a rule but the value that you want to predict. In this case, it predicts 769 bikes. Usually decision trees can be much deeper, and the deeper they are, the more complexity they are able to explain. In this case, a two level tree was configured using the parameter max_depth during the instantiation of the model. Here's the notebook with the code and the data. ABOUT THE AUTHOR >>IMAGE:.
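The introduction mentions assessing the quality of the predictions; a possible quick check, not shown in the text above, is to compare the model's predictions against the observed counts:

# not part of the original tutorial: put a number on how well the tree fits
from sklearn.metrics import mean_absolute_error

X = np.array([bikes['temperature']]).T
predictions = regressor.predict(X)

print(mean_absolute_error(bikes['count'], predictions))  # average error, in bikes
print(regressor.score(X, bikes['count']))                # R^2 score: closer to 1 is better

Keep in mind that this measures the fit on the same data the tree was trained on, so it is optimistic; splitting the data into training and test sets (for example with sklearn's train_test_split) gives a more honest estimate.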
https://cambridgespark.com/content/tutorials/getting-started-with-regression-and-decision-trees/index.html
CC-MAIN-2017-43
refinedweb
1,009
55.74
I was installing Zwiki2.0b1 with Zope-2.12.2 and import my own zwiki site.I almost have not any problems. Today, I tried to view my issuetracker and that pages. My zwiki shows blank page without any error messages. I thought probably ir is one of non-ascii characters issues on ZWiki-2.12.2. Then I remove all non-ascii characters catrvoriesin the issue_categories property to find what it do. My zwiki shows the issuetracker and all issue pages complately. Server: Zope setup: Zwiki usage: ... --simon, Tue, 12 Jan 2010 10:14:39 -0800 reply Thanks for the report. Can you send me the url of your issue tracker if it is accessible ? Can you see any error in Zope's error_log, or in event.log (you may need to increase logging verbosity to BLATHER). I'm sorry. It's my misstake --koyoshi, Sat, 16 Jan 2010 00:29:12 -0800 reply Thank you for your advice, and I'm sorry for my wrong report. This isuue was occurred without a file needed for stabilization of using unicode characters. The file is <python-virtualenv>/lib/python2.6/site-packages/sitecustomize.py: import sys sys.setdefaultencoding('UTF-8') This file must be into the site-packages folder, if we tried to see the objects titled into non-ascii characters on the more older zope ZMI. Last year, I installed this file for my Zope2.12.1. But, I thought that I wanted to do it more easy and automatical steps. Recently, I tried to make scripts for the current zope's installing and setup. I was using the scripts to install a lot of times for that test. Now, one of these Installed Zope2.12.2 Instance and it's python's environment. Actually, I wasn't aware that it had not sitecustomize.py. I'm sorry for my mistakes. m(_ _)m And thanks for your help. To halt this mistake, My scripts were fixed for install of current zope. I posted this script file in my blog with my broken English. ^^;; Thanks! Name: '#1464 Can not display issuetracker with Zope-2.12.2 and non-ascii issue_categories' => '#1464 Without sitecustomize.py, not display issuetracker with Zope-2.12.2 and non-ascii issue_categories' Severity: serious => minor Status: open => closed
http://zwiki.org/1464WithoutSitecustomizePyNotDisplayIssuetrackerWithZope2122AndNonAsciiIssueCategories
CC-MAIN-2017-51
refinedweb
384
70.09
Quick Links RSS 2.0 Feeds Lottery News Event Calendar Latest Forum Topics Web Site Change Log RSS info, more feeds Topic closed. 6 replies. Last post 11 years ago by we;reallwinners. one day last week I had two different people give me thier phone numbers and both numbers had the same last 4 digits.... I don't feel confident enough to play the 3 or the 4 digit games, but felt I should try these numbers..... I played the digits straight and boxed and asked for an "easy pick" as a third ticket..... the "easy pick" ticket was the same numbers that I had selected...... is that strange or normal??? be vewyvewy quiet, WewHuntingNumbers Synchronicity. The real question is if you won or not. that is pretty rare...... Hmm....suppose both the phone numbers are in the same area code and the first three digits after the area code are irrelevant. Once you received the first phone number, what are the odds of coincidentally receiving a second similar number form someone else... There are 10,000 unique four digit ending sequences just like in the Pick 4 game (0000-9999). The odds of getting two phone numbers with the same ending digits in the same exact order is 1 in 10,000. If all four digits are the same, but the order is not (1234 & 3124) then the odds are about 1 in 417. If the number contained 1 repeating digit (1134) and the order was not the same (1134 & 3141) then the odds would be about 1 in 833. If the numbers contained two sets of repeating digits (1122) but were not in the same order (1212 & 1122) then the odds are about 1 in 1,667. Of course, there are a few other possibilities left, but your #'s probably fit into one of those scenarios. ...as for the quick pick, I see those machines do alot of strange things. I think many times the next number printed is based on the last one purchased. I bought 3 auto picks in Ohio for the mega millions and had them all on seperate tickets. When I got the three tickets there was a noticable pattern between the three. ~Probability=Odds in Motion~ def strange *reallwinners, yea and most of the time i believe in the "signs" but hey ya neva know so..me, I would keep playin that # till it hits cuz it prob will...GOOD LUCK thanx for the feedback....the combination hasn't hit since the day I got them, but I will keep trying... 5014 were the numbers in case anyone else wants to try.
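For anyone curious where those odds come from, they follow from counting how many of the 10,000 possible four-digit endings share the same digits. A quick check (not from the thread) reproduces the figures quoted above:

# counting arrangements of a 4-digit ending among 10,000 possibilities
total = 10000

print(total / 24.0)   # four distinct digits, any order (4! = 24): ~417
print(total / 12.0)   # one repeated digit (4!/2! = 12): ~833
print(total / 6.0)    # two pairs (4!/(2!*2!) = 6): ~1,667

The exact-same-order case is simply 1 out of the 10,000 possible endings, which is the 1 in 10,000 figure quoted.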
https://www.lotterypost.com/thread/132570
CC-MAIN-2017-04
refinedweb
440
81.33
based.) Here’s a refactoring example for a simple Ruby math problem using the inject method. The goal is to generate an n by n multiplication matrix. Pretty straightforward. Let’s make a first pass: def multiplication_table(n) results = [] (1..n).each do |row_index| row = [] (1..n).each { |column_index| row << row_index * column_index } results << row end results.each do |row| header = '%-3s ' * row.length puts header % row end end multiplication_table(7) 1 2 3 4 5 6 7 2 4 6 8 10 12 14 3 6 9 12 15 18 21 4 8 12 16 20 24 28 5 10 15 20 25 30 35 6 12 18 24 30 36 42 7 14 21 28 35 42 49 It does the job, but it’s nothing to look at. We can condense using in-line blocks and fancy curly braces, but first let’s try using inject. Quick background: Inject iterates over a set using a block with a collector, which is a variable whose value is passed back each round, making it good for iterative uses. The block’s return value is passed into the collector (make sure you return something or you’ll get a nil). Also, the return value of the whole inject block is the final collector value. Simple example from the docs: (5..10).inject {|sum, n| sum + n } #=> 45 The inject method takes a parameter, which initializes the collector value (default is to set to the first value). You can pass in an empty array and add values: def multiplication_table(n) results = (1..n).map do |row_index| (1..n).inject([]) { |row, n| row + [row_index * n] } end results.each { |row| puts '%-3s ' * row.length % row } end So that’s inject with the iterative collect (Whoo-aaaa - got you all in check) Note: For a more in-depth discussion of inject, read Mike Burns’ recent post. I. I. We? The Gang of Four differentiates these succinctly:. Gang of Four:. Now, the most confusing one:.
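Back to inject for a moment: the "make sure you return something" warning above is worth a concrete illustration (this one is mine, not from the post). Whatever the block returns becomes the collector for the next round, so a block whose last expression isn't the collector quietly breaks things:

# works: the block returns the hash, so it keeps accumulating
(1..3).inject({}) { |squares, n| squares[n] = n * n; squares }
#=> {1=>1, 2=>4, 3=>9}

# broken: hash assignment returns n * n, so on the second pass the
# collector is the integer 1, not the hash, and you get a NoMethodError
# (1..3).inject({}) { |squares, n| squares[n] = n * n }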
http://robots.thoughtbot.com/tagged/Ruby
crawl-003
refinedweb
329
73.78
rich:tree help for displaying folders under a directoryAyhan T Nov 6, 2009 5:31 PM I am just learning rich:tree and have no problem in displaying a directory like a WEB-INF in a tree like structure (actually there is a very good example for this in the documentation). My problem is displaying folders in a tree like structure outside web app, for example displaying all files under C:\temp directory. Is there any reference or example regarding how to display folders and their subfolders (and their subfolders etc) from a directory (outside of WEB app)? I was trying to use java.io to capture directories and files and rich:recursiveTreeNodesAdaptor without any luck so far. Any help would be appreciated. Thank you, 1. Re: rich:tree help for displaying folders under a directoryNick Belaevski Nov 7, 2009 11:22 AM (in response to Ayhan T) Hi, What's the problem with doing that? 2. Re: rich:tree help for displaying folders under a directoryAyhan T Nov 7, 2009 10:41 PM (in response to Ayhan T) Hi, Let's say I have three folders: folder1, folder2, and folder3 in C:\temp directory. I am trying to get the three folders in xhtml using a similar rich:tree snippet like: <h:form> <rich:tree <rich:recursiveTreeNodesAdaptor </rich:tree> </h:form> I am able to present folder1, folder2, and folder3 in tree. (At least I can do this). But when I select any one of them, instead of getting the files and subdirectories of the folder, I am getting folder1,folder2, and folder3 again as subdirectories. I mean if I select folder1 in tree, I see folder1,folder2,folder3 as children elements. When I select one of the children folders, I am again getting folder1, folder2, and folder3. I think I can continue to get the same content till there is an Out Of Memory stack overflow. I think I have a problem in my stateAdvisor. I am using the one provided in one of your demos. public class TreeDemoStateAdvisor implements TreeStateAdvisor { public Boolean adviseNodeOpened(UITree tree) { if (!PostbackPhaseListener.isPostback()) { Object key = tree.getRowKey(); TreeRowKey treeRowKey = (TreeRowKey) key; if (treeRowKey == null || treeRowKey.depth() <= 2) { return Boolean.TRUE; } } return null; } public Boolean adviseNodeSelected(UITree tree) { return null; } } I think the problem is I cannot get a relation between my data model and UI Tree, and I keep getting the same main folders as children folders over and over. Any help would be appreciated greatly. Thanks, 3. Re: rich:tree help for displaying folders under a directoryNick Belaevski Nov 8, 2009 6:03 AM (in response to Ayhan T) Hi, It's easy to check whether it's the thing really causing problems - just remove "stateAdvisor" attribute.
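For readers hitting the same symptom: the attributes of the original snippet were lost above, but the usual cause is a nodes expression that points back at the bean's top-level folder list instead of at the current node exposed by var, so every level re-expands the same folders. A sketch of the working shape, loosely following the standard RichFaces file-system example (the bean and property names here are illustrative, not from this thread):

<h:form>
  <rich:tree
    <rich:recursiveTreeNodesAdaptor
  </rich:tree>
</h:form>

Here fileSystemBean.rootNodes would return small wrapper objects around java.io.File, and each wrapper's getDirectories() would list only that node's own subdirectories, which is what lets the recursion descend instead of repeating folder1, folder2, and folder3 at every level.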
https://developer.jboss.org/thread/17769
CC-MAIN-2018-05
refinedweb
456
62.07
TechRepublic's free .NET newsletter, delivered each Wednesday, contains useful tips and coding examples on topics such as Web services, ASP.NET, ADO.NET, and Visual Studio .NET. Automatically sign up today! When Windows developers need a unique value, they often utilize a Globally Unique Identifier (GUID). Microsoft uses the term GUID for a unique number that identifies an entity, such as a Word document. A GUID is a 128-bit integer (16 bytes) that you can use across all computers and networks wherever a unique identifier is required. There's a very low probability that this type of identifier will be duplicated. This article explains how the .NET Framework makes it possible for you to create your own GUID. Everywhere you look GUIDs are used throughout the Windows environment. When you peruse the registry on a Windows system, you can see that GUIDs are used extensively to uniquely identify applications and so forth. In particular, they serve as application IDs under the HKEY_CLASSES_ROOT section (AppID key). This is the format of a typical GUID: 936DA01F-9ABD-4d9d-80C7-02AF85C822A8 Generating a GUID with .NET Possessing a unique identifier makes it easy to store and retrieve information. This is especially useful when working with a database because a GUID makes an excellent primary key. Also, SQL Server integrates well with GUID usage. The SQL Server data type uniqueidentifier stores a GUID value. You can either generate this value within SQL Server—using the NEWID() function—or you can generate the GUID outside of SQL Server and insert it manually. The latter approach is a straightforward process in .NET. The base System class in the .NET Framework includes the GUID value type. In addition, this value type includes methods to work with GUID values. In particular, the NewGUID method allows you to easily generate a new GUID. The following C# command-line application shows how it's used: using System; namespace DisplayGUID { class GuidExample { static void Main(string[] args) { GenerateGUID(); } static void GenerateGUID() { Console.WriteLine("GUID: " + System.Guid.NewGuid().ToString()); } } } Here's the output of the program (though the GUID will vary from system to system): GUID: 9245fe4a-d402-451c-b9ed-9c1a04247482 This is the same code using VB.NET: Module BuilderExamples Sub Main() GenerateGUID() End Sub Public Sub GenerateGUID() Console.WriteLine("GUID: " + System.Guid.NewGuid().ToString()) End Sub End Module And here's the same code example using J#: package BuilderExamples; import System.Console; public class GUIDExample { public GUIDExample() { } public static void main(String[] args) { GenerateGUID(); } static void GenerateGUID() { Console.WriteLine("GUID: " + System.Guid.NewGuid().ToString()); } } The example code uses the NewGuid function of the System.Guid namespace to return a value. (If you ever had to do this in Visual Basic, you'll appreciate the simplicity of this approach.) At this point, you may be thinking that GUIDs are a nice feature, but how and where would you use them in your applications? Using a GUID in your application A GUID makes a great primary key in a back-end database. The example in Listing A uses a GUID to store information in a back-end database, which has the following columns: pk_guid (uniqueidentifier data type) and name (nvarchar data type). A simple Windows form is presented with one text box. The data from the text box is inserted into the database table when a button is selected. 
A GUID is generated by the application code and stored in the other column. Another GUID application is assigning a unique identifier to a .NET class or interface; that is, the GUID is assigned as an attribute for the class or interface. This is accomplished using the standard attribute syntax. We can extend our first example to assign a GUID to it. The System.Runtime.InteropServices namespace must be referenced to utilize the GUID attribute. The following C# code does that: using System; using System.Runtime.InteropServices; namespace DisplayGUID { [Guid("9245fe4a-d402-451c-b9ed-9c1a04247482")] class GuidExample { static void Main(string[] args) { GenerateGUID(); } static void GenerateGUID() { Console.WriteLine("GUID: " + System.Guid.NewGuid().ToString()); } } } GUIDs have never been easier As with most aspects of development, the .NET Framework simplifies the process of creating and working with GUID values. This makes it easy to generate unique values where necessary in your .NET.
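Listing A itself is not reproduced in this excerpt, so as a hedged sketch of the same idea (the table name, column values, and connection string are placeholders, not taken from the article), the button handler could generate the GUID in code and pass it to the uniqueidentifier column with a parameterized command:

using System;
using System.Data.SqlClient;

class GuidInsertExample
{
    static void Main()
    {
        // placeholder connection string and table name
        using (SqlConnection conn = new SqlConnection("your-connection-string"))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO GuidTable (pk_guid, name) VALUES (@pk, @name)", conn))
        {
            cmd.Parameters.AddWithValue("@pk", Guid.NewGuid()); // maps to uniqueidentifier
            cmd.Parameters.AddWithValue("@name", "some text from the form");
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}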
http://www.techrepublic.com/article/generating-and-working-with-guids-in-net/
CC-MAIN-2017-22
refinedweb
709
57.37
02 March 2011 23:38 [Source: ICIS news] HOUSTON (ICIS)--Styron is seeking a price increase on ?xml:namespace> The price increase initiative will be for 7 cents/lb ($154/tonne, €113/tonne) effective on 1 April, or as contracts allow. The company also announced a price increase of 9 cents/lb for high-heat resistant ABS. The price increase nomination comes on the heels of another announcement from INEOS ABS, which is seeking a price increase of 5 cents/lb on ABS and 9 cents/lb on high-heat resistant ABS, also effective on 1 April. Price increase initiatives on ABS that took effect in March, on average for 6 cents/lb, found traction, buyers and distributors said. Demand for ABS has been described as strong to healthy, especially relative to the time of year, when construction orders are thin because of colder weather. In the Several producers said that record ABS prices in Asia will also lend strength to upcoming The recent price increase nominations have been sparked mostly by rising feedstock costs in the Major ($1 = €0.73) To learn more about
http://www.icis.com/Articles/2011/03/02/9440288/styron-seeking-7-centlb-price-hike-on-us-abs-on-higher-feeds.html
CC-MAIN-2015-06
refinedweb
185
58.11
15 March 2011 17:53 [Source: ICIS news] PRAGUE (ICIS)--Italy's Eni and Russia's Gazprom are discussing a deal for a stake in a refiner that could impact on the petrochemical expansion strategy of the Czech Republic's Unipetrol, a source at Unipetrol said on Tuesday. " Eni had informed Unipetrol that it was in talks to sell its 32.5% stake in main Czech refiner Ceska Rafinerska, he added. Unipetrol holds a 51% stake in Ceska Rafinerska, but production sharing agreements with the other shareholders limit the amount of output, including petrochemical feedstock, that it can source from its two refineries. ?xml:namespace> "We can theoretically block any deal between Eni and Gazprom because we have option of first refusal on the stake, but we would need to put in a better bid than Gazprom to do so," he added. Czech energy security officials remain wary of allowing energy powerbroker Russia into the Czech Republic's refining sector. When in 2009, Royal Dutch Shell let it be known its 16% stake in Ceska Rafinerska might be divested if a good bid was received, Czech state-run pipeline operator MERO suggested it should consider placing a bid to prevent a Russian company acquiring it. Ceska Rafinerska runs a refinery in Litvinov, a location near the northern border with Germany where Unipetrol's main petrochemical units are also based, and a small refinery in Kralupy nad Vltavou, near Prague. "We have long looked at the possibility of building up our shareholding in the company, partly to give us options on more petchem feedstock,"
http://www.icis.com/Articles/2011/03/15/9444194/eni-gazprom-refinery-deal-could-hit-unipetrols-expansion-strategy.html
CC-MAIN-2014-52
refinedweb
263
51.41
Other resources from O’Reilly Related titles Essential PHP Security Learning PHP 5 Learning MySQL Mastering Regular Expressions MySQL Cookbook ™ MySQL in a Nutshell MySQL Pocket Reference PHP Cookbook ™ PHP Hacks ™ Programming PHP Web Database Applications with PHP and MySQL,pro- gramming languages, and operating systems. seconds. Read the books on your Bookshelf from cover to cover or sim- ply flip to the page you need. Try it today for free. Learning PHP and MySQL SECOND EDITION Michele E. Davis and Jon A. Phillips Beijing • Cambridge • Farnham • Köln • Paris • Sebastopol • Taipei • Tokyo Learning PHP and MySQL, Second Edition by Michele E. Davis and Jon A. Phillips Copyright © 2007, 2006 Michele E. Davis and Jon A.: Simon St.Laurent Production Editor: Marlowe Shaeffer Copyeditor: Reba Libby Proofreader: Sohaila Abdulali Indexer: Ellen Troutman Zaig Cover Designer: Karen Montgomery Interior Designer: David Futato Illustrator: Jessamyn Read Printing History: June 2006:First Edition. August 2007:Second Edition. Nutshell Handbook, the Nutshell Handbook logo, and the O’Reilly logo are registered trademarks of O’Reilly Media,Inc.Learning PHP and MySQL,the image of kookaburra. This book uses RepKover ™ , a durable and flexible lay-flat binding. ISBN-10: 0-596-51401-8 ISBN-13: 978-0-596-51401-3 [M] v Table of Contents Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix 1.Dynamic Content and the Web . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 HTTP and the Internet 1 PHP and MySQL’s Place in Web Development 2 The Components of a PHP Application 4 Integrating Many Sources of Information 7 Requesting Data from a Web Page 11 2.Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Developing Locally 15 Working Remotely 35 3.Exploring PHP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 PHP and HTML Text 39 Coding Building Blocks 43 4.PHP Decision-Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 Expressions 62 Operator Concepts 64 Conditionals 71 Looping 77 5.Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 Calling Functions 87 Defining Functions 89 Object-Oriented Programming 96 vi | Table of Contents 6.Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 Array Fundamentals 107 7.Working with MySQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 MySQL Database 122 Managing the Database 125 Using phpMyAdmin 126 Database Concepts 131 Structured Query Language 132 8.Database Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 Database Design 146 Backing Up and Restoring Data 155 Advanced SQL 159 9.Getting PHP to Talk to MySQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 The Process 180 Querying the Database with PHP Functions 180 Using PEAR 190 10.Working with Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199 Building a Form 199 Templates 218 11.Practical PHP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
223 String Functions 223 Date and Time Functions 233 File Manipulation 238 Calling System Calls 249 12.XHTML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 Why XHTML? 253 XHTML and XML Namespaces 254 XHTML Versions 254 Generating XHTML with PHP 261 13.Modifying MySQL Objects and PHP Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 Changing Database Objects from PHP 263 Manipulating Table Data 266 Displaying Results with Embedded Links 267 Table of Contents | vii Presenting a Form to Add and Process in One File 270 Updating Data 276 Deleting Data 277 Performing a Subquery 282 14.Cookies, Sessions, and Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 Cookies 285 PHP and HTTP Authentication 288 Sessions 294 Using Auth_HTTP to Authenticate 301 15.Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307 Session Security 316 16.Validation and Error Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325 Validating User Input with JavaScript 325 Pattern Matching 329 Redisplaying a Form After PHP Validation Fails 333 17.Sample Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 Configuration File 340 Page Framework 340 Database 343 Displaying a Postings Summary 346 Displaying a Posting and Its Comments 349 Adding and Changing Posts 352 Adding and Changing Comments 358 18.Finishing Your Journey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366 PHP Coding Standards 366 PEAR 371 Frameworks 372 Ajax 373 Wikis 373 Finding Help on the Web 373 Appendix. Solutions to Chapter Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391 ix Preface 1 PHP and MySQL are a powerful combination that makes it easy to create web appli- cations.If you’ve been creating web pages but want to build more sophisticated sites that can grow and interact with users,PHP and MySQL let you get started easily and then build complex applications on those foundations. Our goal is to help you learn the ins and outs of PHP and MySQL and to save you some of the “Why doesn’t that work?” moments that we’ve already been through. We’ll show you what to watch for and how to fix these issues without pulling out your hair. Audience This book is for people who want to know how to create dynamic web sites.That could include graphic designers who are already working in an IT or advertising firm creating static web sites,and who may need to move forward with coding database- driven web sites.It might also include people who already know,say,Flash develop- ment and HTML markup,but need to expand their repertoire of skills to databases and programming. Assumptions This Book Makes This book assumes you understand how web browsers work and have a basic under- standing of HTML.Some understanding of JavaScript may be useful (for Chapter 16) but isn’t generally required. You might also be overqualified.If you already know how to create pages using MySQL and PHP,then you’d probably be better off with a book that is more a refer- ence than a learning book,such as Paul Hudson’s PHP in a Nutshell,or Russell Dyer’s MySQL in a Nutshell, both from O’Reilly. 
x | Preface Organization of This Book This book starts out with an overview of how all of the pieces you’ll be working with fit together.Because there are multiple languages and technologies that interact to form dynamic web pages,it’s best to start with a solid understanding of how the pieces work together.The PHP that you’ll learn works as an integration package for dynamic web sites. Next,we’ll walk through installing the core software packages on your local com- puter.This book focuses on PHP and MySQL,but making this work also usually requires the Apache web server.The PHP interpreter works with the web server when processing dynamic content.Finally,you’ll install the MySQL database.Instal- lation is covered for PC,Mac,and Linux systems.You can also use a hosted Internet service provider (ISP) account to develop your pages,if you don’t want to install everything locally. Since PHP plays an important role in pulling everything together,we next explain the basics of working with the PHP language.This includes language essentials such as data types,program flow logic,and variables.Functions,arrays,and forms each get their own chapter to fully explore them. Because you may be new to databases in general,we ease into MySQL by first explaining concepts that apply to designing and using any relational database.Then we give specific examples of using MySQL to interact with your data.Once you can get data in and out of the database,you’ll need to work with PHP to integrate that data into your dynamic content. Security and access control get their own chapters.While security may sound like a dull subject,it’s still a huge issue if you store any private information on your web page. We’ll guide you around several common security pitfalls. We also touch on how XHTML,the next generation of HTML,works with PHP and your web sites. Finally,we close with sample applications that demonstrate how the technologies work together to rapidly build workable,fast web sites.You’ll also be provided with web sites and forums to gain additional information on the topics covered in the book. Supporting Books Even if you feel you are ready for this book,you may want to explore some of the technologies in greater depth than is possible here.The following list offers some good places to start: • Run Your Own Web Server Using Linux & Apache,by Tony Steidler-Dennison (SitePoint). • PHP in a Nutshell, First Edition, by Paul Hudson (O’Reilly). Preface | xi • MySQL in a Nutshell, First Edition, by Russell Dyer (O’Reilly). • CSS Cookbook, Second Edition, by Christopher Schmitt (O’Reilly). There are also several good online resources for dynamic web development,including of the O’Reilly Network.LAMP stands for Linux,Apache, MySQL, PHP. LAMP is the de facto standard for serving dynamic web pages. Conventions Used in This Book The following font conventions are used in this book: Italic Indicates pathnames,filenames,and programnames;Internet addresses,such as domain names and URLs; and new items where they are defined. Constant width Indicates command lines;names and keywords in programs,including method names,variable names,and class names;HTML element tags;values;and data- base engines. Constant width italic Indicates text that should be replaced with user-supplied values. Constant width bold Indicates emphasis in program code lines and user input options that should be typed verbatim. This icon signifies a tip, suggestion, or general note. This icon indicates a warning or caution. 
Using Code Examples This book is here to help you get your job done.In general,you can use the code in this book in your programs and documentation.You do not need to contact O’Reilly for permission unless you’re reproducing a significant portion of the code.For exam- ple,writing a program that uses several chunks of code from this book does not require permission.Selling or distributing a CD-ROM of examples from O’Reilly books does require permission.Answering a question by citing this book and quot- ing example code does not require permission.Incorporating a significant amount of example code from this book into your product’s documentation does require permission. xii | Preface We appreciate,but do not require,attribution.An attribution usually includes the title,author,publisher,and ISBN.For example:“Learning PHP and MySQL,Second Edition,by Michele E.Davis and Jon A.Phillips.Copyright 2007 Michele E.Davis and Jon A. Phillips, 978-0-596-51401-3.” If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact the publisher at permissions@oreilly.com. How to Contact Us We have tested and verified the information in this book to the best of our ability, but mistakes and oversights do occur addi- tional information. You can access this page at: There is also a blog for this book located at: To comment or ask technical questions about this book, send email to: bookquestions@oreilly.com For more information about our books,conferences,Resource Centers,and the O’Reilly Network, see our web site at: informa- tion. Try it for free at. Preface | xiii Acknowledgments We are happy to have this newly improved and expanded Second Edition out for our audience.We’d like to thank our wonderful agent,Matt Wagner of Fresh Books, along with Simon St.Laurent at O’Reilly for getting this Second Edition rolling;with- out them, this book wouldn’t be in your hands. Second,profuse thanks to our technical editors,especially Jereme Allen,Charlie Maguire,and Peter MacIntyre for their fantastic edits to our book.We’d also like to thank our local Minneapolis/St.Paul PHP community: sparked our interest in PHP and MySQL years ago.Lastly,thanks to Simon,Mimi, and Zack for being patient while their parents reworked a very important book. 1 Chapter 1 CHAPTER 1 Dynamic Content and the Web 1 To the average user,a web page is a web page.It opens in the browser and provides information.Looking closer,though,some pages stay mostly the same,while other pages change regularly.Pages that don’t change—static pages—are relatively simple to create.Someone has to create an HTML document,by hand or with tools,and upload it to a site where web browsers can visit.One of the most common tools to create HTML documents is Adobe Dreamweaver.When changes are needed,you just replace the old file with a new one.Dynamic pages are also built with HTML, but instead of a simple build-and-post approach,the pages are updated regularly, sometimes every time that they are requested. Static sites provide hyperlinked text and perhaps a login screen,but beyond that, they don’t offer much interaction.By contrast,Amazon.com() demonstrates much of what a dynamic web site can do:your ordering data is logged, and Amazon offers recommendations based on your purchasing history when you access their page.In other words,dynamic means that the user interacts with the web site beyond just reading pages,and the web site responds accordingly.Every page is a personalized experience. 
Creating dynamic web pages—even a few years ago—meant writing a lot of code in the C or Perl languages,and then calling and executing those programs through a process called a Common Gateway Interface (CGI).Having to create executable files wasn’t much fun,and neither was learning a whole new complicated language. Thankfully, PHP and MySQL make creating dynamic web sites easier and faster. HTTP and the Internet Some basic understanding of how the Internet works may be useful if you haven’t programmed for the Web before.The HyperText Transfer Protocol (HTTP) defines how web pages are transferred across the Internet.HTTP is the method used to transfer or convey information on the World Wide Web.Its original purpose was to provide a way to publish and retrieve HTML pages. 2 | Chapter 1:Dynamic Content and the Web The World Wide Web Consortium (W3C) and the Internet Engineering Task Force coordinated the development of HTTP,which is a request-and-response protocol that connects clients and servers.The originating client,usually a web browser,is referred to as the user agent.The destination server,which stores or creates resources and can contain HTML files and images,is called the origin server.Between the user agent and origin server, there may be several intermediaries, such as proxies. An HTTP client initiates a request by establishing a Transmission Control Protocol (TCP) connection to a particular port on a remote host (port 80 is the default).An HTTP server listening on that port waits for the client to send a request message. Upon receiving the request,the server sends back a status line,like “HTTP/1.1 200 OK,” and its own response.Depending on the status,this response could be the requested file, an error message, or some other information. HTTP is built on top of TCP,which is itself layered on top of Internet Protocol (IP). The two are often referred to together as TCP/IP.Applications on networked hosts can use TCP to create connections to one another,and then exchange streams of data.The protocol guarantees reliable delivery of data from sender to receiver.TCP supports many of the Internet’s most popular application protocols and applica- tions, including the Web, email, and Secure Shell (SSH). PHP and MySQL’s Place in Web Development PHP is a programming language designed to generate web pages interactively on the computer serving them,which is called a web server.Unlike HTML,where the web browser uses tags and markup to generate a page,PHP code runs between the requested page and the web server, adding to and changing the basic HTML output. PHP makes web development easy because all the code you need is contained within the PHP framework.This means that there’s no reason for you to reinvent the wheel each time you sit down to develop a PHP program;it comes with web functionality built-in. While PHP is great for web application development,it doesn’t store information by itself.For that,you need a database.The database of choice for PHP developers is MySQL,which acts like a filing clerk for PHP-processed user information.MySQL automates the most common tasks related to storing and retrieving specific user information based on your supplied criteria. Consider the Amazon.com example:the recommendations Amazon offers are based on a database that records your prior order information. MySQL is easily accessed from PHP,and they work well together.An added benefit is that PHP and MySQL run on various computer types and operating systems, including Mac OS X, Windows-based PCs, and Linux. 
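To see the "dynamic" part at its simplest, before any database enters the picture, here is a small sketch (an illustration only, not one of the numbered examples) of a page whose output changes every time it is requested; date( ) is a built-in PHP function that formats the current date:

<html>
 <body>
 <p>Today is <?php echo date('F j, Y'); ?>.</p>
 </body>
</html>

The call to date( ) runs on the web server each time the page is fetched, so the HTML sent to the browser is generated fresh for every request, even though the file on the server never changes.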
PHP and MySQL’s Place in Web Development | 3 Advantages of Using PHP with MySQL There are several factors that make using PHP and MySQL together a natural choice: PHP and MySQL work well together PHP and MySQL have been developed with each other in mind,so they are easy to use together.The programming interfaces between them are logically paired up.Working together wasn’t an afterthought when the developers created the PHP and MySQL interfaces. PHP and MySQL have open source power As they are both open source projects,PHP and MySQL can both be used for free.MySQL client libraries are no longer bundled with PHP.Advanced users have the ability to make changes to the source code,and therefore change the way the language and programs work. PHP and MySQL have community support Both tools active communities on the Web in which you can participate,and the participants will help you answer your questions.You can also purchase profes- sional support for MySQL if you need it. PHP and MySQL are fast Their simple and efficient designs enable faster processing. PHP and MySQL don’t bog you down with unnecessary details You don’t need to know all of the low-level details of how the PHP language interfaces with the MySQL database,as there is a standard interface for calling MySQL procedures from PHP.Online application programming interfaces (APIs) at offer unlimited resources. The Value of Open Source As we mentioned above,both PHP and MySQL are open source projects,so you don’t need to worry about buying user licenses for every computer in your office or home.When using.Most open source licenses include the right to distribute modified code with some restrictions. For example,some licenses require that derivative code must also be released under the same license, or there may be a restriction that others can’t use your code. As Tim O’Reilly puts it,“Open source licensing began as an attempt to preserve a culture of sharing,and only later led to an expanded awareness of the value of that sharing.” Today,open source programmers share their code changes on the Web via web sites.If you’re caught in a coding nightmare and can’t wake up, the resources mentioned previously can and will help you. 4 | Chapter 1:Dynamic Content and the Web We’ll arm you with open source user forums later in this book so you can check them out yourself.We’ll include listservs and web sites so that you have numerous resources if you run into a snafu. The Components of a PHP Application In order to process and develop dynamic web pages,you’ll need to use and under- stand several technologies.There are three main components of creating dynamic web pages:a web server,a server-side programming language,and a database.It’s a good idea to have an understanding of these three basic components for web devel- opment using PHP.We’ll start with some rudimentary understanding of the history and purpose of Apache (your web server),PHP (your server-side programming lan- guage),and MySQL (your database).This can help you to understand how they fit into the web development picture. Remember that dynamic web pages pull information from several sources simulta- neously,including Apache,PHP,MySQL,and Cascading Style Sheets (CSS),which we’ll talk about later. PHP) and Sun’s Java Server Pages (JSP).PHP also is an interpreted language,rather than a compiled one.The real beauty of PHP is simplicity coupled with power. 
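One practical consequence of being interpreted is worth a quick sketch here. Assuming the php command-line program was installed along with the rest of PHP and is on your path (an assumption, not a given on every setup; the filename greet.php is arbitrary), you can edit a script and rerun it immediately, with no separate build step:

<?php
// greet.php -- run it from a command prompt or Terminal with: php greet.php
echo "Hello from the PHP interpreter!";
?>

Change the text, run php greet.php again, and the new output appears at once; there is no binary to rebuild.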
Compiled languages create a binary file such as an.exe,while inter- preted every- one.The creators of PHP developed an infrastructure that allows experienced C pro- grammers to extend PHP’s abilities.As a result,PHP now integrates with advanced technologies like XML,XSL,and Microsoft’s Component Object Model Technolo- gies (COM). The Components of a PHP Application | 5 Apache containing the PHP language code. Apache is not the only web server available.Another popular web server is Microsoft’s Internet Information Services (IIS),which is supplied with Windows 2000 and all later versions.Apache has the decided advantages of being free,provid- ing full source code,and using an unrestricted license.Apache 2.0 is the current ver- sion you would most likely be using,though 1.3 is often still used designed to serve up HTML files,they need a way to know how to process PHP code.Apache uses modules to load exten- sions into its functionality.IIS uses a similar concept called Internet Server Applica- tion Program Interface :1.3 and 2.Apache 2 is a major rewrite and supports threading.Threads allow a single process to manage more than one thing at a time.This increases speed and reduces the resources needed.Unfortu- nately,PHP isn’t totally compatible with threading yet.Apache 2 has been out long enough to be considered stable for use in development and production environ- ments. Apache 2 also supports more powerful modules.Some additional modules can be found at module DLLs that don’t come with the official Apache source files,such as mod_php4,mod_ ssl, mod_auth_mysql, and mod_auth_ntsec, can be found on the Web. Apache also has the advantage of being able to run on operating systems other than Windows,which now brings us to the subject of compatibility.But first we’ll give you a little more in-depth coverage of relational databases and SQL. 6 | Chapter 1:Dynamic Content and the Web SQL and Relational Databases Structured Query Language (SQL) is the most popular language used to create, retrieve,update,and delete data from relational database management systems.A relational database conforms to the relational model and refers to a database’s data and schema.The schema is the database’s structure of how data is arranged.Common usage of the term “Relational Database Management System” technically refers to the software used to create a relational database, such as Oracle or Microsoft SQL Server. A relational database is a collection of tables,but other items are frequently consid- ered part of the database,as they help organize and structure the data in addition to forcing the database to conform to a set of requirements. MySQL MySQL is a free yet full-featured relational database.MySQL was developed in the 1990s to fill the ever-growing need for computers to manage information intelli- gently.The original core MySQL developers were trying to solve their needs for a database by using mSQL,a small and simple database.It become clear that mSQL couldn’t solve all the problems they wanted it to,so they created a more robust data- base that turned into MySQL. MySQL supports several different database engines.Database engines determine how MySQL handles the actual storage and querying of the data.Because of that,each storage engine has its own set of abilities and strengths.Over time,the database engines available are becoming more advanced and faster.Table 1-1 lists when vari- ous features have been added to MySQL. 
Table 1-1.Major MySQL releases Version Features 3.23 The MyISAM database engine is added and is the default engine. It handles large amounts of data efficiently. The InnoDB database engine debuts for transaction safe database processing and support for foreign keys.Foreign keys allow the relationships between tables to be explicitly designated in the database. 4.0 Queries support unions. Unions allow merging the results of two queries into one result. Configuration changes can be made without restarting the database. 4.1 A command is included for the database client. There is support for unnamed views, also known as subqueries. Unnamed views allow you to treat a query like a separate table within a query. There is support for Unicode character sets (local languages). 5.0 Database triggers, stored procedures, constraints, and cursors are added. A trigger allows code to run in the data- base when a triggering event occurs, such as inserting data into a table. Stored procedures allow programs to be defined and executed within the database. Constraints are used to define rules for when rows can be added or modified in the database. Cursors allow code in the database to be run for each row that matches a query. Integrating Many Sources of Information | 7 The current production release of MySQL is the 5.0x version.MySQL 5.0 provides performance that is comparable to any of the much more expensive enterprise data- bases such as Oracle,Informix,DB2 (IBM),and SQL Server (Microsoft).The devel- opers have achieved this level of performance by leveraging the talents of many open source developers,along with community testing.For general web-driven database tasks, the default MyISAM database engine works perfectly fine. The newest advanced features of MySQL 5.1 are not as stable as fea- tures introduced in prior releases.MySQL 5.0 is the current stable general release.Download the latest minor release (the largest of the third portion of the version number) for whichever major version you choose. It has the most bug fixes for that version included. Don’t worry too much about the latest and greatest features,as the bulk of what you’ll probably need has been included in MySQL for a very long time. Compatibility Web browsers such as Safari,Firefox,Netscape,and Internet Explorer are made to process HTML,so it doesn’t matter which operating system a web server runs on. Apache,PHP,and MySQL support a wide range of operating systems (OS),so you aren’t restricted to a specific OS on either the server or the client.While you don’t have to worry much about software compatibility,the sheer variety of file formats and different languages that all come together does take some getting used to. Integrating Many Sources of Information In the early days of the Web,life was simple.There were files that contained HTML, and binary files such as images.Several technologies have since been developed to organize the look of web pages.For example,Cascading Style Sheets (CSS) pull pre- sentation information out of your HTML and into a single spot so that you can make formatting changes across an entire set of pages all at once;you don’t have to manu- ally change your HTML markup one HTML page at a time. You can potentially have information coming from HTML files that reference CSS, PHP templates,and a MySQL database all at once.PHP templates make it easier to 5.1 Partitioning, Scheduling, a Plug-in API, and Row-based replication are added. 
Partitioning is used to split up the physical storage of large tables based on a defined rule. It’s commonly used to increase the performance of large tables such as older data that is considered historical. Scheduling allows for database code to be executed at defined times. The plug-in API paves the way to add and remove functionality to the MySQL server without restarting it. Row-based replication copies data from one server to another at the row level. Table 1-1.Major MySQL releases (continued) Version Features 8 | Chapter 1:Dynamic Content and the Web change the HTML in a page when it contains fields populated by a database query. We’ll take a quick look at how these pieces come together. Just to give you a taste of what your code will look like,Example 1-1 shows MySQL code called from PHP for inserting a comment into a MySQL database.This exam- ple contains PHP code that generates HTML from a MySQL database,and that HTML itself refers to a CSS stylesheet. Example 1-1.A PHP function to insert a comment into a comments database table <?php //A function to insert a comment into a comments table based on //the $comment parameter. //The database name is also a parameter function add_comment($comment,$database){ // Add a comment // As a security measure, escape any special characters in the user_name. $comment=mysql_real_escape_string($comment); // This is the SQL command $sql_insert = "INSERT INTO `comments` (body) VALUES ('$comment')"; // Select the database mysql_select_db($database); $success = mysql_query($sql_insert) or die(mysql_error( )); // print the page header print(' <html> <head> <title>Remove User</title> <link rel="stylesheet" type="text/css" href="example.css" /> </head> <body> <div class="comments">'); // Check to see if the insert was successful if ($success){ // Tell the user it was successful print("The comment $comment was inserted successfully."); } else { // Tell the user it was not successful print("The comment $comment could not be inserted. Please try again later."); } // Print the page footer print('</div></body></html>'); } ?> Integrating Many Sources of Information | 9 Don’t worry about understanding precisely what’s happening in Example 1-1.The idea is simply to realize that there’s PHP code,database code,and a link to a stylesheet. To simplify the maintenance of sites that have many different pages,but all share a common look,the header and footer of each page can be placed in a separate file and included in each PHP page.This allows changes to be made to the header or footer in one location that change the look of every page automatically.This frees the devel- oper using the Smarty template engine for- mat.The template engine is required to substitute the values into the template. Smarty is discussed in Chapter 10. When the template engine processes the page,the placeholders are replaced with their associated values, as shown in Example 1-3. Example 1-2.A PHP Smarty template <html> <head> <title>My Books</title> </head> <body> <p>Favorite Books:</p> <p> Title: {$title}<br /> Author: {$author} </p> </body> </html> Example 1-3.The resulting HTML code after template substitution and processing <html> <head> <title>My Books</title> </head> <body> <p>Favorite Books:</p> <p> Title:Java in a Nutshell<br /> Author:Flanagan </p> </body> </html> 10 | Chapter 1:Dynamic Content and the Web shown here,CSS,also comes from a desire to separate the presentation styles such as colors and spacing from the core content. 
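Before moving on to CSS, one gap in the template story is worth sketching: Examples 1-2 and 1-3 show the template and the finished HTML, but not the PHP that performs the substitution. The following is only a rough sketch of what that glue code might look like with Smarty (the template filename books.tpl and the book values are made up for illustration, the location of Smarty.class.php depends on where Smarty is installed, and Smarty itself is covered in Chapter 10):

<?php
// Load the Smarty class and create a template object
// (assumes Smarty.class.php is somewhere PHP can find it)
require('Smarty.class.php');
$smarty = new Smarty();
// Hand the placeholder values to the template engine
$smarty->assign('title', 'Java in a Nutshell');
$smarty->assign('author', 'Flanagan');
// Process the template from Example 1-2 and send the result to the browser
$smarty->display('books.tpl');
?>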
Cascading Style Sheets (CSS) supplements HTML to give web developers and users more control over the way their web pages display.Designers and users can create stylesheets that define how different elements,such as headers and links,appear on the web site.The termcascading derives fromthe fact that multiple stylesheets at dif- ferent levels can be applied to the same web page with definitions inheriting from one level to the next web sites!</h3> <h4>It's cool, it's amazing, it even saves you time!</h4> <p>Isn't this <b>nifty</b>?</p> </body> </html> In the CSS,you can either designate a color by naming it,as we did here with the background designation,“ background:yellow ”,or you can assign it with a numeric color code,as we did here,“ color#80D92F ”.The code that begins with style is the CSS code. The document renders as shown in Figure 1-1. Although we include the CSS in the file in this example,it could come from a sepa- rate file as it did in Example 1-1, where it was referenced as user_admin.css. For more information on CSS,see Eric Meyer’s Cascading Style Sheets: The Definitive Guide (O’Reilly). Of course, we also have plain old HTML files in the mix. HTML markup applies tags to content to identify information that is of a particular type or that needs special formatting.HTML tags are always enclosed in angle brack- ets ( <> ) and are case-insensitive;so,it doesn’t matter whether you type in upper- or Requesting Data from a Web Page | 11 lowercase (though XHTML recommends all lowercase).But really,it’s a matter of style.We use uppercase in our web sites so we can see the HTML better and put a carriage return between each markup line the element applied to it.In the earlier example,the text “Learn how to use CSS on your web sites!” is contained by an h3 element: <h3>Learn how to use CSS on your web sites!<> italic</i></b> ,you should close the code like this: </b></i> .) In other words,you should open and close items at the same level.So,if you open a bold and then italic, you should close the italic before you close the bold. Requesting. Figure 1-1.CSS and HTML displayed in your browser 12 | Chapter 1:Dynamic Content and the Web Processing PHP on the server is called server-side processing.When you request a web page,you trigger a whole chain of events.Figure 1-2 illustrates this interaction between your computer and the web server, which is the host of the web site. Here’s the breakdown of Figure 1-2: 1.You enter a web page address in your browser’s location bar. 2.Your browser breaks apart that address and sends the name of the page to the web server.For example, would request the page directory.html from. 3.A programon the web server,called the web server process,takes the request for directory.html and looks for this specific file. 4.The web server reads the directory.html file from the web server’s hard drive. 5.The web server returns the contents of directory.html to your browser. 6.Your web browser uses the HTML markup that was returned from the web server to build the rendition of the web page on your computer screen. The HTML file called directory.html (requested in Figure 1-2) is called a static web page: Figure 1-2.While the user only types in a URL and hits Enter, there are several steps that occur behind the scenes to handle that request Your computer Word Web host Web server process Hard disk Request Internet Request Response Response 1 6 2 3 4 5 Requesting Data from a Web Page | 13 1.You enter a web page address in your browser’s location bar. 
2.Your browser breaks apart that address and sends the name of the page to the host.For example, requests the page login.php from. 3.The web server process on the host receives the request for login.php. 4.The web server reads the login.php file from the host’s hard drive. 5.The web server detects that the PHP file isn’t just a plain HTML file,so it asks another process—the PHP interpreter—to process the file. 6.The PHP interpreter executes the PHP code that it finds in the text it received from the web server process.Included in that code are calls to the MySQL data- base. 7.PHP asks the MySQL database process to execute the database calls. 8.The MySQL database process returns the results of the database query. 9.The PHP interpreter completes execution of the PHP code with the data from the database and returns the results to the web server process. 10.The web server returns the results in the form of HTML text to your browser. 11.Your web browser uses the returned HTML text to build the web page on your screen.. Figure 1-3.The PHP interpreter, MySQL, and the web server cooperate to return the page Your computer Word Web host Web server Hard disk Request Internet Request Response Response 1 11 2 5 10 PHP interpreter MySQL 9 6 7 8 4 3 14 | Chapter 1:Dynamic Content and the Web When developing dynamic web pages,you work with a variety of variables and server components,which are all important to having an attractive,easy-to-navigate, and maintainable web site.In Chapter 2 we show you how to install the three major cogs needed to make this work: Apache, PHP, and MySQL. Chapter 1 Questions Question 1-1 What three components do you need to create a dynamic web page? Question 1-2 What does Apache use to load extensions? Question 1-3 What does SQL (as in MySQL) stand for? Question 1-4 What are angle brackets ( <> ) used for? Question 1-5 What does the PHP Interpreter do? See the “Chapter 1” section in the Appendix for the answers to these questions. 15 Chapter 2 CHAPTER 2 Installation 2 Developers working with PHP and MySQL often find it more convenient to work on a local computer rather than a remote web server.In general,it is also safer to create and test your applications on a local—preferably private—computer and then deploy them to a public server where others can enjoy your work.Typically,you need to install Apache,PHP,and MySQL on the local computer,while your ISP handles installation on the public server. Developing Locally Developing your web applications on your local computer is a good way to learn, because you can interact with all of the components on your own machine and not risk causing problems on a production server.That way,if there are problems in the local environment,you can fix them immediately without exposing them to your site’s visitors.Working with local files means that you don’t have to FTP them to a server,you don’t have to be connected to the Internet,and you know exactly what’s installed, since you did it yourself. There are three components to install: • Apache • PHP • MySQL You need to install the programs in that order.All our examples will be from the installation perspective of a PC with Windows installed,with notes for Macintosh and Linux systems. 
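Before installing anything, it can be worth checking whether some of these components are already on your machine. As a rough sketch only (the Apache program name varies from system to system, for example httpd on some installations and apache2 on others, and each command works only if the program is on your path), try the following at a command prompt or Terminal:

php -v
mysql --version
httpd -v

Each command prints a version number if the component is installed; a "command not found" style error simply means you will be installing that component in the steps that follow.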
16 | Chapter 2:Installation The easiest way to install Apache,PHP,and MySQL on most Linux systems is to download a packaged distribution.All popular Linux dis- tributions have prebuilt packages fromApache,PHP,and MySQL.For example,Redhat Linux uses .rpm packages,while Debian uses .deb packages.Consult your distribution’s installation instructions for installing additional packages.Many Linux distributions install Apache,PHP,and MySQL by default,so you may not even need to install them. If this looks too daunting, try XAMPP. Bundled or Full Installations When just starting out,it can be easier to install a bundled set of Apache,MySQL, phpMyAdmin,and PHP.There are several packages available that install all of these at the same time as a single installer within one directory on your computer.These packages also provide a control panel to start and stop individual components and administer them.In other words,it’s a great way for a beginner to start out.The downside is that they’re not meant for production use,as they are often configured with minimal security to make them easier to use.We’ll discuss one of the more popular packages,called XAMPP.First,we’ll discuss installing everything the old- fashioned way. Installing Apache Apache needs to be installed and operational before PHP and MySQL can be installed,or else they won’t work correctly.Any computer can be turned into a web server by installing server software and connecting the machine to the Internet, which is why you need to install Apache.To keep the installation as simple as possi- ble,we’ll address only the latest versions of Apache,PHP,and MySQL.Although you can use older versions, they’re more difficult to install and get to work together. 1.Download the Apache 2.x Win32 MSI installer binary.It’s downloadable from the “Download from a mirror” link on the left side of the page and download the best available version.A mirror is a down- load location.The file that you save to your desktop will be named similarly to apache_2.2.4-win32-x86-no_ssl.msi (the exact version number will vary). If you are on Mac OS X,you already have Apache installed.Open Sys- tem Preferences,select the Sharing panel,and click to activate Per- sonal Web Sharing (which is actually Apache).Mac OS X 10.2,10.3, and 10.4 all come with different versions of Apache,but each works perfectly fine. 2.Install Apache using the Installation Wizard.Double-click the MSI installer file on your desktop, and you see the installer shown in Figure 2-1. The Installation Wizard walks you through the installation process. Developing Locally | 17 3.Accept the license terms by clicking the radio button shown in Figure 2-2.Click Figure 2-1.The Installation Wizard prompts you for basic configuration Figure 2-2.Apache license terms and conditions for use 18 | Chapter 2:Installation 4.You’ll see a Read This First box,as shown in Figure 2-3.Additionally,this win- dow offers a number of excellent resources related to the web server. Click Next. 5.In the dialog shown in Figure 2-4,enter all pertinent network information.Click Port 80 is the default HTTP port.In other words,when you request’re implicitly requesting port 80.By accept- ing this port,your web requests can be made without specifying a nondefault port.Your computer’s web server can always be accessed using the loopback address or the IP address http:// 127.0.0.1 . They can be used interchangeably. 6.In the next screen,shown in Figure 2-5,select the setup type.The Typical install will work for your purposes. Click Next. 
7.Accept the default installation directory, as shown in Figure 2-6. Click Next. The default installation directory,C:\Program Files\Apache Software Foundation\Apache2.2\,is both standard and easy to find,especially when you need to make changes to your configuration. Figure 2-3.Apache HTTP Server information Developing Locally | 19 Figure 2-4.Server Network Information dialog Figure 2-5.Selecting a setup type 20 | Chapter 2:Installation 8.As Figure 2-7 shows,it’s time to begin the installation.Click Install.The installer installs a variety of modules,and you will see some DOS windows appear and disappear. 9.Click Finish when the installer is done. 10.Test your installation by entering in your browser’s location field.Remember,localhost is just the name that translates to the IP address 127.0.0.1, which is always the address of the local computer. 11.After entering the URL in your browser,the default Apache page displays,which is similar to the one shown in Figure 2-8.The installation was successful if you see the text “It works!” This page may be different depending on which version of Apache you install.Generally,if you see text that doesn’t mention an error, the installation was successful. Now that you can serve up web pages, you’re ready to add PHP. Figure 2-6.Destination Folder dialog for the Apache installation files Figure 2-7.“Ready to Install” dialog Developing Locally | 21 Installing PHP Go to to download the latest version of PHP;both binaries and source code can be found on this web site.Under Windows Binaries, select the PHP 5.x installer where x is the latest available version.Select a mirror site in your country from the list of mirrors to download the file: 1.The file that you save to your desktop will be named similarly to php-5.2.1- win32-installer.msi (the exact version number will vary). 2.Install PHP using the Installation Wizard.Double-click the MSI installer file on your desktop, and you’ll see the installer shown in Figure 2-9. Figure 2-8.Apache’s default index page after installation Figure 2-9.The PHP MSI installer 22 | Chapter 2:Installation 3.Click Next. The License Terms dialog appears as shown in Figure 2-10. 4.Click the checkbox to accept the licensing terms. Click Next. 5.The Destination Folder dialog appears (see Figure 2-11).Select the destination folder.You may use the default of C:\Program Files\PHP or C:\PHP (examples in this book that modify the PHP configuration files assume C:\PHP). Click Next. Figure 2-10.The License Terms dialog Figure 2-11.The installation directory for PHP Developing Locally | 23 6.The Web Server Setup dialog appears as shown in Figure 2-12.Select “Apache 2.2.x Module” and click Next.Naturally,if you were using a different web server, such as IIS, you could select that option here. 7.The Apache Configuration Directory dialog specifies where you installed Apache so that the installer can set up the Apache configuration to use PHP for you.It should be similar to C:\Program Files\Apache Software Foundation\Apache2.2\, as shown in Figure 2-13. 8.Figure 2-14 shows the “Choose Items to Install” dialog.The defaults on this dia- log are all OK.If you changed the base install directory,you may also need to change it here. Click Next. 9.Click Install on the “Ready to install” screen to confirm the installation. 10.Click Yes to confirm configuring Apache when the dialog shown in Figure 2-15 appears. 11.Click OK on the Apache Config dialog to acknowledge the successful Apache update for httpd.conf. 
12.Click OK on the Apache Config dialog to acknowledge the successful Apache update for mime.types. 13.The Successful Installation dialog appears. Figure 2-12.The Web Server Setup dialog 24 | Chapter 2:Installation Statements prefixed by the hash sign ( # ) in HTML and PHP are con- sidered commented out and can be seen only by you—never your end user—in a browser window. Figure 2-13.Selecting the Apache install path Figure 2-14.The Installation Options dialog Developing Locally | 25 14.Restart the Apache server by selecting Start ➝ All Programs ➝ Apache HTTP Server 2.x.x ➝ Control Apache Server ➝ Restart,so that it can read the new con- figuration directives that the PHP installer placed in the httpd.conf configuration file.This file tells Apache to load the PHP process as a module.Alternatively,in the system tray, double-click the Apache icon and click the Restart button. To test the installation, do the following: 1.Create a PHP file in any text editor with the following line: <?php phpinfo(); ?> 2.Save the file as phpinfo.php,and then save it under the Apache htdocs directory, usually located at C:\ProgramFiles\Apache Software Foundation\Apache2.2\htdocs. It must have a file extension of.php or it won’t be processed as a PHP file. 3.Open your browser of choice. 4.Access the file you just created by typing into your browser’s location bar.You should see a page of information about your PHP setup, as shown in Figure 2-16. Enabling PHP on Mac OS X If you are on Mac OS X,you have PHP preinstalled on your computer,but it’s not enabled. You need to edit the Apache configuration file to enable PHP. The built-in search utilities for Mac OS X won’t find the configuration file you need to edit,as it’s considered a system file and hidden from novice users. You’ll need to use the Terminal to access this file. 1.Open Terminal from the Applications/Utilities folder. 2.Type: sudo vi /etc/httpd/httpd.conf 3.Enter your Mac OS X password for an Administrator account (or simply the first account set up on the Mac). Figure 2-15.Dialog confirming that the installer will configure Apache 26 | Chapter 2:Installation 4.To uncomment the line that loads the PHP module (by removing the hash [ # ] character at the beginning of the line), type: %s/#LoadModule php/LoadModule php/ Press Enter after the last slash.The %s command in vi performs a search and replace. 5.To uncomment the line that loads the PHP module, type: %s/#AddModule php/addModule php/ Skip steps 6 and 7 if you’re using Panther (10.3) or Tiger (10.4),as the required lines are already present in these versions. 6.Mac OS X 10.2 needs to map PHP index files by adding index.php to the DirectoryIndex directive by typing the following to replace index.html with index.html index.php : :%s/index.html/index.html index.php/ 7.Mac OS X 10.2 also needs to add this block of text to tell Apache that the PHP extensions must be processed as PHP files.The block of text must be added after the line: Include /private/etc/httpd/users Figure 2-16.Your PHP configuration details Developing Locally | 27 Type Go to add this text to the end of the file: <IfModule mod_php4.c> AddType application/x-httpd-php .php AddType application/x-httpd-php .php4 AddType application/x-httpd-php-source .phps </IfModule> 8.To save the changes, type: <escape>:wq where <escape> is the Escape key that exits the editing mode. 9.Restart Apache (Personal Web Sharing) from the System Preferences Sharing panel. 
10.To create a test.php file to test your installation at the Terminal, type: vi ~/Sites/test.php o <?php phpinfo() ?> <escape>:wq where <escape> is the Escape key.This creates a file with the elusive.php file extension, since the built-in Mac OS X text editor likes to add.rtf to text files. 11.Navigate to the URL username/test.php where username is your short Mac OS X account name.If you’re unsure of your short name,select About This Mac fromthe Apple menu and click the More Info button.The short name appears in parentheses at the end of the username row. 12.The test.php page (similar to the PC installation) displays in your browser with a MySQL section. This indicates a successful installation. PHP should now be running on your Mac. Installing MySQL 5.0 The final component you need to develop and test pages on your local computer is MySQL. Now you’ll download the MySQL Installer: 1.Download the MySQL binaries.Both the binaries and the source code can be found at MySQL Community Server, click the Download button. 2.Click Windows. 3.Click the download link for Windows Essentials (x86).This file is a Windows MSI installer. 4.The link takes you to a page where you can either enter your personal info or just click No Thanks to download the file.A number of download locations are available;select one.Download the recommended latest version,currently 5.0. Save the installer file to your desktop. 5.Double-click the MSI installer file on your desktop.A setup wizard,shown in Figure 2-17, walks you through the installation process. Click Next. 28 | Chapter 2:Installation 6.Select the typical installation by clicking the Typical radio button shown in Figure 2-18, and then click Next. Figure 2-17.The MySQL Setup Wizard Figure 2-18.Select a setup type Developing Locally | 29 7.The “Ready to Install Program” dialog appears. Click Install. 8.MySQL installs files and then displays the MySQL.comSign-Up dialog shown in Figure 2-19.Select “Skip Sign-Up” and click Next,or sign up for an account, which provides access to a monthly newsletter as well as the ability to post bugs and comments on the online forums. 9.Click the “Configure the MySQL Server now” checkbox shown in Figure 2-20. Click Finish. 10.This brings up the MySQL Server Instance Configuration Wizard. Click Next. 11.Select the Standard Configuration radio button from the dialog shown in Figure 2-21. Click Next. 12.In the dialog shown in Figure 2-22,check both “Install As Window Service” and “Include Bin Directory in Windows PATH.” The second option allows you to run the MySQL command-line tools from the command prompt without being in the MySQL bin directory. Click Next. 13.Enter a password for the root user in the password and confirm fields shown in Figure 2-23.Click Next.You don’t need the Anonymous Account,since you can do everything with named accounts.Leave “Enable root access from remote machines” unchecked. 14.Click Execute on the MySQL Server Instance Configuration dialog. Figure 2-19.The MySQL.com account setup dialog 30 | Chapter 2:Installation Figure 2-20.The Configuration Wizard customizes the database settings Figure 2-21.Choose the level of detail dialog Developing Locally | 31 Figure 2-22.How to start MySQL and set up the system path Figure 2-23.Security settings for the database window 32 | Chapter 2:Installation 15.Click Finish,as shown in Figure 2-24.MySQL is now configured and running on your computer. At this point, all critical components—Apache, PHP, and MySQL—are installed. 
The wizard will informyou of basic problems during installation,such as running out of free disk space or not having proper permissions on your system to install MySQL. Installing the MySQL Connector There’s one last piece that you’ll need to download and install in order for PHP to be able to talk to MySQL.The Connector/PHP download provides two.dll files for PHP that are required to use MySQL: 1.Download the MySQL PHP Connector from connector/php/. 2.Unzip the file with a name similar to php_5.2.0_mysql_5.0.27-win32.zip. 3.Create a directory called C:\php\extensions. 4.Copy the two.dll files to this directory. 5.Also,copy the libmysql.dll file to C:\windows\system32 (or any other directory in the system path). Figure 2-24.Installation is complete Developing Locally | 33 6.Verify that the file C:\php\php.ini contains the following lines (the first line may not need any modification,while the second line may just need to be uncom- mented): extension_dir = C:\php\extensions extension=php_mysql.dll 7.Restart the Apache service. 8.Navigate to your phpinfo.php test page ().You should now see a section with the heading MySQL in the middle of the page.That sec- tion confirms that PHP can talk to MySQL. Mac OS X MySQL installation If you are running 10.3 or 10.4,you have the much easier option of installing the standalone.dpkg file from the MySQL web site.The installation for Mac OS X 10.2 is slightly more complex,as the binaries for 10.2 are no longer available from the MySQL web site.Instead,you’ll use a collection of software called Fink for the Mac. There are many Unix tools and services available through Fink that are preconfig- ured to work on your version of Mac OS X.To install MySQL using Mac OS X 10.2 and Fink: 1.Download Fink from. 2.Double-click on the installer package. 3.Accept the license terms. 4.Select the installation drive. 5.Accept the dialogs to modify your shell profile. 6.You’re now ready to use Fink to download and install MySQL.At the Terminal prompt, type: sudo apt-get install mysql sudo apt-get install mysql-client daemonic enable mysql 7.MySQL is now installed on your Mac. For 10.3 and 10.4,you may download and install the.dpkg files from the MySQL download page at- low the directions in the installer to accept the license terms and a disk on which to install. XAMPP XAMPP is available for Windows,Linux,and newer Mac OS X systems (Intel-based, OS X 10.4).XAMPP offers a simple,integrated approach to installing all the tools you need on multiple platforms.The following steps cover installing XAMPP on Windows, but the installation process is similar for all platforms: 34 | Chapter 2:Installation 1.Download the Basic Package XAMPP MSI installer found at. apachefriends.org/en/xampp-windows.html. 2.Double-click the MSI installer file on your desktop,and you’ll see the installer shown in Figure 2-25. 3.Select English and click the OK button. 4.The Setup Wizard appears as shown in Figure 2-26. Click Next. 5.The dialog shown in Figure 2-27 is displayed.Click Next to accept the default installation directory. Figure 2-25.The Language selection dialog Figure 2-26.The Xampp Setup Wizard Working Remotely | 35 6.The XAMPP Options dialog displays,as shown in Figure 2-28.Leave the Service Section checkboxes unchecked so you don’t install the components as services; instead, you’ll start them from the Control Panel. Click Install. 7.The Completing the XAMPP Setup Wizard displays. Click Finish. 8.The option to start the Control Panel displays as shown in Figure 2-29.Click Yes. 
9.The Control Panel launches, as shown in Figure 2-30. The Control Panel can start and stop the services,as well as aid in their configu- ration. Working Remotely Although we recommend that you start out working locally,you can use an ISP account as long as it supports PHP and MySQL. You need login information to the remote server,and you may need to use your ISP’s web-based tool to create your database. To transfer your files and directories,you need to activate a File Transfer Protocol (FTP) account at your ISP,usually through your account control panel.Once you have an FTP login, upload your HTML and PHP files using an FTP client. Figure 2-27.Select the installation directory 36 | Chapter 2:Installation Your provider may require you to use Secure FTP (SFTP) instead of with your provider for details.Many FTP programs also support SFTP. While your computer likely has the command-line version of the FTP client,it can be cryptic to use.Graphical FTP clients make using FTP much easier.FTP Voyager, available from one FTP client you can use to upload files to your ISP.Your initial login screen looks similar to Figure 2-31.Fetch is a good FTP program for Mac. After connecting using Voyager,you’ll see a dialog similar to Figure 2-32.You can drag and drop the.php files you created.Remember,for your PHP files to run,you need to save them with an extension of.php instead of.html because the web server needs to know it’s a PHP file in order to run the PHP interpreter. Figure 2-28.Choose your installation options Figure 2-29.Installation is complete Working Remotely | 37 PHP files must be accessed through a web server,since your web browser doesn’t have the ability to interpret the PHP code.A PHP interpreter is used to process the PHP files. Figure 2-30.The Control Panel starts and stops the components Figure 2-31.FTP Voyager initial screen 38 | Chapter 2:Installation You’re ready to start learning all about basic facts,integration,and how to get your dynamic web page up and running as quickly and smoothly as possible.In Chapter 3 we’ll give you basic information about PHP and simple coding principles that apply to using PHP. Chapter 2 Questions Question 2-1 What three components must be installed to create a dynamic web site? Question 2-2 What OS has Apache installed already? Question 2-3 Where should you create a PHP directory for downloads? Question 2-4 What does the hash ( # ) sign mean? Question 2-5 How do you work remotely? Question 2-6 How do you transfer files to your ISP? Question 2-7 How must PHP files be accessed? See the “Chapter 2” section in the Appendix for the answers to these questions. Figure 2-32.FTP Voyager directory listing 39 Chapter 3 CHAPTER 3 Exploring PHP 3 With PHP,MySQL,and Apache installed,you’re ready to begin writing code.Unlike many languages,PHP doesn’t require complex tools such as compilers and debug- gers show- ing an image based on the current user’s browser,and printing a warning message if the user is browsing from an operating system that makes your web site look crummy. All this and more is possible with PHP, which makes these tricks simple. PHP and HTML Text It’s simple to output text using PHP;in fact,handling text is one of PHP’s special- ties.We’ll begin with detailing where PHP is processed,then look at some of the basic functions to output text,and from there go right into printing text based on a certain condition being true. Text Output (). 
Our examples demonstrate how similar HTML markup and PHP code look,and what you can do to start noticing the differences between them. 40 | Chapter 3:Exploring PHP Example 3-1 is a simple HTML file. Nothing is special here;it’s just your plain-vanilla HTML file.However,you can enter PHP right into this file;for example,let’s try to use PHP’s echo construct to output some text, as shown in Example 3-2. Separating PHP from HTML Although this example looks pretty simple,it actually wouldn’t work as it is,so there are some problems.There’s no way to tell in this file which part is standard HTML and which part is PHP.Therefore,the echo( ) command must be handled differ- ently. The fix is to surround your PHP code with <?php?> When you start writing PHP code,you’ll be working with simple text files that con- tain PHP and HTML code.HTML is a simple markup language that designates how your page looks in a browser,but it is simply that:text only.The server doesn’t have to process HTML files before sending them to the user’s browser.Unlike HTML code,PHP code must be interpreted before the resulting page is sent to the browser. Otherwise, the result will be one big mess on the user’s screen. To set apart the PHP code to inform the web server what needs to be processed,the PHP code is placed between formal or informal tags mixed with HTML.Example 3-3 uses print constructs to achieve this.The echo and print constructs work almost exactly the same,except echo can take more than one argument but doesn’t return any value,while print takes one argument.We chose hello.php as the filename;how- ever,you can choose any name you like as long as the filename has the extension.php. This tells the web server to process this file’s PHP code. Example 3-1.All you need to start with PHP is a simple HTML document <html> <head> <title>Hello World</title> </head> <body> <p>I sure wish I had something to say.</p> </body> </html> Example 3-2.A wrong way to add some PHP code to the HTML file <html> <head> <title>Hello World</title> </head> <body> echo "<p>Now I have something to say.</p>"; </body> </html> PHP and HTML Text | 41 When a browser requests this file,PHP interprets it and produces HTML markup. Example 3-4 is the HTML produced from the code in Example 3-3. Save your HTML document to your document root,as discussed in Chapter 2.Open the file in a web browser,and you see something like Figure 3-1.The code in Example 3-4 is the same code that you see if you select View ➝ Page Source from your browser’s menu.Make sure that you have the.php extension instead of an.html extension in the filename. Example 3-3.Correctly calling print in hello.php <html> <head> <title>Hello World</title> </head> <body> <?php print "Hello world!<br />"; print "Goodbye.<br />"; print "Over and out."; ?> </body> </html> Example 3-4.The HTML markup produced by the PHP code in Example 3-3 <html> <head> <title>Hello World</title> </head> <body> Hello world!<br />Goodbye.<br />Over and out. 
</body>
</html>

Figure 3-1. The output as it appears in the web browser

While writing PHP code, it's crucial to add comments so that your code is easier to read and support. Most people don't remember exactly what they were thinking when they look at the code a year or more later, so let comments permeate your code, and you'll be a happier PHPer in the future. PHP supports two styles of comments. We suggest using single-line comments for quick notes about a tricky part, and multiline comments when you need to describe something in greater depth; both are shown in Example 3-5.

The interpreter doesn't output the PHP comments. The interpreter outputs only the HTML.

In Example 3-5, two comment styles are used: // for single-line comments; /* ... */ for multiline comments. Keep in mind that if you want to place a comment in HTML markup, you need to use the open comment <!-- and close comment -->.

A semicolon ( ; ) ends all code statements in PHP. Because of this, semicolons can't be used in names. It's good style as well as practical to also start a new line after your semicolon so the code is easier to read.

Since PHP files tend to switch back and forth between PHP code and HTML markup, using an HTML comment in the middle of PHP or a PHP comment in the middle of HTML makes a mess of your page, so be extra vigilant not to do this!

Example 3-5. Using comments to make your code easier to read
<html>
<head>
<title>Hello World</title>
</head>
<body>
<?php
// A single line comment could say that we are going to
// print hello world.
/* This is how to do a multiline comment and could be used to comment
out a block of code */
echo "Hello world!<br />";
echo "Goodbye.<br />";
?>
</body>
</html>
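As an aside on echo and print (this sketch is ours, not one of the numbered examples), the difference mentioned earlier is easy to see in a couple of lines: echo accepts a comma-separated list of arguments and returns nothing, while print takes exactly one argument and returns 1, so it can be used inside an expression:

<?php
// echo takes a list of arguments
echo "Hello", " ", "world!", "<br />";
// print takes one argument and returns 1, so the result can be stored
$result = print "Goodbye.<br />";
echo $result; // displays 1
?>

In day-to-day code the two are interchangeable; pick whichever reads better to you and use it consistently.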
• Variables in PHP are case-sensitive.This means that $variable_name and $Variable_Name are different. • Variables with more than one word can be separated with underscores to make them easier to read; for example, $test_variable . • Variables can be assigned values using the equals sign ( = ). • Always end with a semicolon ( ; ) to complete the assignment of the variable. To create a simple PHP variable as in Figure 3-2, enter: <?php $age = 30; ?> This code takes the variable named age and assigns it the number 30 .You can use variables without having to know the specific value assigned to them. If you have a background in Java or C,you may be wondering why this looks so simple.PHP is not strongly typed,so it’s easy to define and use a variable without worrying what type it has. If you were to assign a new value to a variable with the same name,as happens in Example 3-6, the value referenced by the old name would be overwritten. The new value of $age replaces the old; this is the output: 31 Reading a variable’s value To access the value of a variable that’s already been assigned,simply specify the dol- lar sign ( $ ) followed by the variable name,and use it as you would the value of the variable in your code. You don’t have to clean up your variables when your programfinishes.They’re tem- porary because PHP automatically cleans them up when you’re done using them. Example 3-6.Reassigning a variable <?php $age = 30; $age = 31; echo $age; ?> Coding Building Blocks | 45 Variable types Variables all store certain types of data.PHP automatically picks a data variable based on the value assigned.These data types include strings,numbers,and more complex elements,such as arrays.We’ll discuss arrays later.What’s important to know is that unless you have a reason to care about the data type,PHP handles all of the details, so you don’t need to worry about them. In situations where a specific type of data is required,such as the mathematical divi- sion operation,PHP attempts to convert the data types automatically.If you have a string with a single “2,” it will be converted to an integer value of 2 .This conversion is nearly always exactly what you want PHP to do, and it makes coding seamless for you. Variable scope PHP helps keep your code organized by making sure that if you use code that some- one else wrote (and you very likely will),the names of the variables in your code don’t clash with other previously written variable names.For example,if you’re using a variable called $name that has a value of Bill ,and you use someone else’s code that also has a variable called $name but uses it to keep track of the filename log. txt,your value could get overwritten.Your code’s value for $name of Bill will be replaced by log.txt ,and your code will say Hello log.txt instead of Hello Bill , which would be a big problem. To prevent this from happening,PHP organizes code into functions.Functions allow you to group a chunk of code together and execute that code by its name.To keep variables in your code separate from variables in functions,PHP provides separate storage of variables within each function.This separate storage space means that the scope,or where a variable’s value can be accessed,is the local storage of the func- tion.Figure 3-3 demonstrates how there are distinct storage areas for a function’s variables. 
Example 3-7 shows how the variable you use outside of the function isn't changed by the code within the function. Don't worry too much about understanding how the function works yet, except that it has its own set of unique variables.
Figure 3-3. The $age variable has a separate value outside of the birthday function's variable storage area
This displays:
30
Although calling the function birthday assigns 1 to the variable $age, it's not accessing the same variable that was defined on the main level of the program. Therefore, when you print $age, you see the original value of 30. The bolded part of the code is what is seen when $age is printed, because $age in birthday is a separate variable. If you really want to access or change the variable $age that was created by the birthday function from outside of that function, you would use a global variable.
Global variables. Global variables allow you to cross the boundary between separate functions to access a variable's value. The global statement specifies that you want the variable to be the same variable everywhere that it's defined as global. Figure 3-4 shows how a global variable is accessible to everything.
Example 3-7. The default handling of variable scope
<?php
// Define a function
function birthday(){
    // Set age to 1
    $age = 1;
}
// Set age to 30
$age = 30;
// Call the function
birthday();
// Display the age
echo $age;
?>
Figure 3-4. The global keyword creates one global variable called $age
Example 3-8 shows that use of a global variable can result in a change. This displays:
31
Global variables should be used sparingly because it's easy to accidentally modify a variable without realizing what the consequences are. This kind of error can be very difficult to locate. Additionally, when we discuss functions in detail, you'll learn that you can send in values to functions when you call them and get values returned from them when they're done. You really don't have to use global variables. If you want to use a variable in a specific function without losing the value each time the function ends, but you don't want to use a global variable, you would use a static variable.
Static variables. Static variables provide a variable that isn't destroyed when a function ends. You can use the static variable value again the next time you call the function, and it will still have the same value as when it was last used in the function.
Call and execute mean the same thing, as do function and method.
The easiest way to think about this is to think of the variable as global but accessible to just that function. A static keyword is used to dictate that the variable you're working with is static, as illustrated in Figure 3-5.
Example 3-8. Using a global variable changes the result
<?php
// Define a function
function birthday(){
    // Define age as a global variable
    global $age;
    // Add one to the age value
    $age = $age + 1;
}
// Set age to 30
$age = 30;
// Call the function
birthday();
// Display the age
echo $age;
?>
In Example 3-9, we use the static keyword to define these function variables.
This displays:
Birthday number 1
Birthday number 2
Age: 30
Figure 3-5. The static variable creates a persistent storage space for $age in birthday
Example 3-9. A static variable remembering its last value
<?php
// Define the function
function birthday(){
    // Define age as a static variable
    static $age = 0;
    // Add one to the age value
    $age = $age + 1;
    // Print the static age variable
    echo "Birthday number $age<br />";
}
// Set age to 30
$age = 30;
// Call the function twice
birthday();
birthday();
// Display the age
echo "Age: $age<br />";
?>
(As Figure 3-5 illustrates, the value of $age inside birthday is saved between executions of birthday().)
The XHTML markup <br /> tag is turned into line breaks when your browser displays the results. The value of $age is now retained each time the birthday function is called. The value will stay around until the program quits. The value is saved because it's declared as static. So far, we've discussed two types of variables, but there's still one more to discuss: super globals.
Super global variables. PHP uses special variables called super globals to provide information about the PHP script's environment. These variables don't need to be declared as global. They are automatically available, and they provide important information beyond the script's code itself, such as values from a user's input.
Since PHP 4.01, the super globals are defined in arrays. Arrays are special collections of values that we'll discuss in Chapter 6. The older super global variables such as those starting with $HTTP_* that were not in arrays still exist, but their use is not recommended, as they are deprecated. Table 3-1 shows the existing arrays since PHP 4.01.
An example of a super global is $_SERVER["PHP_SELF"]. This variable contains the name of the running script and is part of the $_SERVER array (see Example 3-10).
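The excerpt cuts off before the book's Example 3-10 is reproduced. As a rough sketch only (not the book's actual example; the file name and surrounding markup are invented for illustration), a page that echoes this super global might look like:

<html>
<head>
<title>Super global test</title>
</head>
<body>
<?php
// Print the name of the currently running script, for example /test.php
echo "This script is: " . $_SERVER["PHP_SELF"];
?>
</body>
</html>

Requesting such a page through the web server prints the script's own path.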
https://www.techylib.com/en/view/slicedmites/learning_php_and_mysql_online_tech_books_2
CC-MAIN-2019-04
refinedweb
12,843
56.96
Hi all,
Recently I ran into a problem: the boot code version readout (by IAP on an LPC804) is not behaving as described in the API documentation of LPC804 SDK v2.6.0. In my tests, in a DEBUG build the second 32-bit word of the result array is always zero, but in a RELEASE build it gets random values. It seems the second 32-bit word is not filled in at all by the boot ROM function that reads out the boot code version. So I searched the user manual (UM11065) and compared source code from the "older" LPC804 example code bundle v1.6 and the current SDK v2.6.0 (see below).
Description of the result according to UM11065 - User Manual LPC804 rev 1.3 (4.6.6 on page 37):
Description: This command is used to read the boot code version number.
Result: Result0: 2 bytes of boot code version number. Read as <byte1(Major)>.<byte0(Minor)>
Code snippet from LPC804 Example Code Bundle v1.6:
uint32_t ReadBootcodeVersion(void)
{
    IAP.cmd = IAP_READ_BOOT_VER;
    IAP_Call (&IAP.cmd, &IAP.stat);
    printf("Boot Version is 0x%X\n\n",IAP.res[0] );
    if (IAP.stat)
        return (0);
    return (IAP.res[0]);
}
Code snippet from LPC804 SDK v2.6.0:
/*!
 * brief Read boot code version number.
 * This function is used to read the boot code version number.
 *
 * param bootCodeVersion Address to store the boot code version.
 *
 * retval #kStatus_IAP_Success Api was executed successfully.
 * note Boot code version is two 32-bit words. Word 0 is the major version, word 1 is the minor version.
 */
status_t IAP_ReadBootCodeVersion(uint32_t *bootCodeVersion)
{
    uint32_t command[5], result[5];
    command[0] = kIapCmd_IAP_Read_BootromVersion;
    iap_entry(command, result);
    bootCodeVersion[0] = result[1];
    bootCodeVersion[1] = result[2];
    return translate_iap_status(result[0]);
}
The result description in the user manual UM11065 seems to be in line with the implementation in the LPC804 example code bundle v1.6, but the current SDK v2.6.0 is different, and the API description of IAP_ReadBootCodeVersion() does not conform to the user manual description. Please explain how I should interpret the result from IAP_Read_BootromVersion.
Kind regards, Robert
Hello Robert Beekmans,
The User Manual LPC804 rev 1.3 is right: the boot code version has 2 bytes, read as <byte1(Major)>.<byte0(Minor)>. So in the SDK v2.6, when reading the boot code version, you only need to read bootCodeVersion[0] = result[1];
I also tested on my side: IAP read boot code version, and ISP read boot code.
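To make the interpretation concrete, here is a minimal sketch of reading and decoding the version with the SDK v2.6.0 API, based on the UM11065 wording quoted above (one meaningful result word, major in byte 1, minor in byte 0). The header name and the two-word buffer size are assumptions taken from the SDK snippet above, not from additional NXP documentation:

#include <stdio.h>
#include "fsl_iap.h" /* assumed SDK driver header */

void print_boot_code_version(void)
{
    uint32_t bootCodeVersion[2] = {0U, 0U}; /* SDK writes two words; only word 0 is meaningful per UM11065 */
    if (IAP_ReadBootCodeVersion(bootCodeVersion) == kStatus_IAP_Success)
    {
        uint8_t major = (uint8_t)((bootCodeVersion[0] >> 8) & 0xFFU); /* byte 1 = major */
        uint8_t minor = (uint8_t)(bootCodeVersion[0] & 0xFFU);        /* byte 0 = minor */
        printf("Boot code version %u.%u\n", (unsigned)major, (unsigned)minor);
    }
}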
https://community.nxp.com/thread/510487
CC-MAIN-2019-43
refinedweb
397
60.21
set_display_switch_mode man page set_display_switch_mode — Tells Allegro how the program handles background switching. Synopsis #include <allegro.h> int set_display_switch_mode(int mode); Description Sets how the program should handle being switched into the background, if the user tabs away from it. Not all of the possible modes will be supported by every graphics driver on every platform. The available modes are: SWITCH_NONE Disables switching. This is the default in single-tasking systems like DOS. It may be supported on other platforms, but you should use it with caution, because your users won't be impressed if they want to switch away from your program, but you don't let them! SWITCH_PAUSE Pauses the program whenever it is in the background. Execution will be resumed as soon as the user switches back to it. This is the default in most fullscreen multitasking environments, for example the Linux console, but not under Windows. SWITCH_AMNESIA Like SWITCH_PAUSE, but this mode doesn't bother to remember the contents of video memory, so the screen, and any video bitmaps that you have created, will be erased after the user switches away and then back to your program. This is not a terribly useful mode to have, but it is the default for the fullscreen drivers under Windows because DirectDraw is too dumb to implement anything better. SWITCH_BACKGROUND The program will carry on running in the background, with the screen bitmap temporarily being pointed at a memory buffer for the fullscreen drivers. You must take special care when using this mode, because bad things will happen if the screen bitmap gets changed around when your program isn't expecting it (see below). SWITCH_BACKAMNESIA Like SWITCH_BACKGROUND, but this mode doesn't bother to remember the contents of video memory (see SWITCH_AMNESIA). It is again the only mode supported by the fullscreen drivers under Windows that lets the program keep running in the background. Note that you should be very careful when you are using graphics routines in the switching context: you must always call acquire_screen() before the start of any drawing code onto the screen and not release it until you are completely finished, because the automatic locking mechanism may not be good enough to work when the program runs in the background or has just been raised in the foreground. Return Value Returns zero on success, invalidating at the same time all callbacks previously registered with set_display_switch_callback(). Returns -1 if the requested mode is not currently possible. See Also set_display_switch_callback(3), get_display_switch_mode(3), exmidi(3), exswitch(3) Referenced By exmidi(3), exswitch(3), get_display_switch_mode(3), set_display_switch_callback(3).
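A minimal usage sketch (Allegro 4 style, assuming a graphics mode has already been set; the particular fallback order is only an illustration):

#include <allegro.h>

/* Prefer to keep running in the background, otherwise fall back gracefully. */
void setup_switching(void)
{
   if (set_display_switch_mode(SWITCH_BACKGROUND) != 0 &&
       set_display_switch_mode(SWITCH_BACKAMNESIA) != 0) {
      /* Neither background mode is supported by this driver; pause instead. */
      set_display_switch_mode(SWITCH_PAUSE);
   }
}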
https://www.mankier.com/3/set_display_switch_mode
CC-MAIN-2017-30
refinedweb
429
58.92
Exploring Python's Stat Module
Python's stat() module performs a stat system call on the given path and is used to get information about a file or folder. It provides details such as the inode number, size, number of hard links, the time the file was created and modified, and much more. Before looking at what Python's stat() module is, we need to understand a bit about the UNIX file system, which is a logical method to organize and store large amounts of information in a way that makes it easy to manage. We are going to take a look at how files are stored in the Unix system and what types of files are available in the Unix system.
The directory tree
Files in a Unix system are organized in a similar manner to a tree data structure, and that's why the arrangement is known as a directory tree. At the very top of the file system there is a directory called "root", which is represented by "/", and all other files come below this. For now we don't need to understand what each of these things represents. Next we need to see what directories are. In Unix, directories are equivalent to folders in Windows. A directory file contains an entry for every file and sub-directory that it houses, so if the directory has 10 files there are going to be 10 entries in the directory, and each entry has two components.
- Filename.
- A unique identification number for the file or directory (called the inode number). This inode number is different for each file, it can represent any type of file in the Unix system, and everything that exists in Unix has an inode number.
A Unix file is stored in two different parts of the disk: the data blocks and the inodes. The data blocks contain the "contents" of the file, and the bookkeeping information is stored in the inode. On a Unix-like system you can look up the inode numbers by typing this command in the terminal.
ls -i
And the output will be something like this.
13 bin 2 dev 14 lib 17 libx32 14183425 mnt 7091713 root 11556865 snap 1 sys
The numbers at the start are the inode numbers, and after them come the file names (in Unix, directories are just special files). Apart from all this, Unix-like operating systems define a user by a unique user id, often abbreviated to uid, and a group id, abbreviated as gid; these are used to determine which files and which parts of the system that user can access.
As we know, files reside on a physical piece of hardware, which can be an SSD, a hard disk, or anything else. In Unix those devices are also given a number, often referred to as dev, and the files that reside there can be referred to by a number of names: we can create a hardlink for a file. A hardlink is essentially a label or a name given to a file, and we can refer to a file by many different names with the help of hardlinks, which means we can access the same file content under different names.
We know that if a file exists it occupies some space, and that space is reported in bytes by the stat call in Python. Unix also records the time when you open or modify a file, or when the metadata of the file is changed; this information is also stored in the inode, as we are going to see.
Now let's dive into Python's stat module
Python's stat() module performs a stat system call on the given path. You may ask what stat actually means: it means the status of a file or a directory, and to be precise it returns attributes about an inode. We are going to see some Python code and some examples for Python's stat() module.
Syntax
os.stat(path)
Where path identifies the file or directory whose status is wanted.
Now we are going to see some code.
import os

# A folder in my system
info_dir = os.stat('Music')
print(info_dir)

# A png file in my system
info_file = os.stat('Pictures/85327.png')
print(info_file)
Let's see the output.
# Output for "print(info_dir)"
os.stat_result(st_mode=16877, st_ino=14183471, st_dev=66308, st_nlink=2, st_uid=1000, st_gid=1000, st_size=4096, st_atime=1583396013, st_mtime=1571541309, st_ctime=1571541309)
# Output for "print(info_file)"
os.stat_result(st_mode=33188, st_ino=14971652, st_dev=66308, st_nlink=1, st_uid=1000, st_gid=1000, st_size=119113, st_atime=1583320315, st_mtime=1581789576, st_ctime=1581789576)
"Music" is a folder in my system and "85327.png" is a picture, so we can see that the stat() module can be used for any type of file as well as for directories.
Understanding what the keywords mean
st_mode
It shows the inode protection mode. So what actually is a protection mode? Let's look into it with a terminal command.
ls -lai | tail
Output
7091713 drwx------ 10 root root 4096 Mar 6 13:49 root
2 drwxr-xr-x 33 root root 940 Mar 9 17:06 run
18 lrwxrwxrwx 1 root root 8 Oct 20 03:00 sbin -> usr/sbin
11556865 drwxr-xr-x 13 root root 4096 Mar 6 14:31 snap
7748353 drwxr-xr-x 2 root root 4096 Oct 17 17:54 srv
12 -rw------- 1 root root 2147483648 Oct 20 02:59 swapfile
1 dr-xr-xr-x 13 root root 0 Mar 9 2020 sys
14971393 drwxrwxrwt 20 root root 4096 Mar 9 17:36 tmp
12082177 drwxr-xr-x 14 root root 4096 Oct 17 17:56 usr
13658113 drwxr-xr-x 15 root root 4096 Mar 8 02:48 var
As we can see, after the inode number come some strange-looking letters; those are the permissions, and they show who can access, read, or change the file. That is the inode protection mode.
st_ino
Inode number. It's simply the inode number assigned to the file.
st_dev
The disk partition on which the inode is residing, that is, the storage device on which the inode resides.
st_nlink
Number of hard links. A hard link is a directory entry that associates a name with a file on a file system.
st_uid
It contains the user id of the owner who has the rights to access and modify the inode; if there are, say, 2 users on the system, files can carry different user ids that determine who can access each file.
st_gid
Group id of the owner that has the rights to the inode.
st_size
Size in bytes of a plain file; amount of data waiting on some special files.
st_atime
Time of the most recent (last) access.
st_mtime
Time of the most recent (last) content modification.
st_ctime
The "ctime" as reported by the operating system. On some systems (like Unix) it is the time of the last metadata change, and on others (like Windows) it is the creation time.
The time doesn't look like it is in a human-readable form; let's do something about it.
import stat, time
print(time.ctime(info_dir[stat.ST_ATIME]))
Output
Wed Mar 4 16:41:55 2020
Now it's good; you can use similar syntax for st_mtime and st_ctime.
Use cases
- Suppose you are writing to a file and want to check whether some updates were made; you can check by the last modified time.
- You can check the file size directly from your code.
- There are more uses, but they are a bit more involved and depend on the problem at hand.
Resources
- Documentation for the stat module.
- List of Python topics at OpenGenus
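As a small sketch of the first use case above (the file name and polling interval are arbitrary), you can watch st_mtime to notice when a file has been modified:

import os
import time

path = "notes.txt"                     # any file you want to watch
last_mtime = os.stat(path).st_mtime

while True:
    time.sleep(5)                      # poll every 5 seconds
    mtime = os.stat(path).st_mtime
    if mtime != last_mtime:
        print("File changed at", time.ctime(mtime))
        last_mtime = mtime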
https://iq.opengenus.org/stat-module-python/
CC-MAIN-2020-24
refinedweb
1,280
63.53
This chapter describes how to use a databound ADF gauge component to display data, and provides the options for gauge customization. This chapter includes the following sections: Section 25.1, "Introduction to the Gauge Component" Section 25.2, "Understanding Data Requirements for Gauges" Section 25.3, "Creating a Gauge" Section 25.4, "Customizing Gauge Type, Layout, and Appearance" Section 25.5, "Adding Gauge Special Effects and Animation" Section 25.6, "Using Custom Shapes in Gauges" For information about the data binding of gauges, see the "Creating Databound ADF Gauges" section in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework. Gauges identify problems in data. A gauge usually plots one data point with an indication of whether that point falls in an acceptable or an unacceptable range. Frequently, you display multiple gauges in a single gauge set. The gauges in a set usually appear in a grid-like format with a configurable layout. A Component Gallery displays available gauge categories, types, and descriptions to provide visual assistance when creating gauges and using a quick-start layout. Figure 25-1 shows the Component Gallery for gauges. When a gauge component is inserted into a JSF page using the Component Gallery, a set of child tags that support customization of the gauge is automatically inserted. Example 25-1 shows the code inserted in the JSF page for a dial gauge with the quick-start layout selected in the Component Gallery in Figure 25-1. Example 25-1 Gauge Sample Code <dvt:gauge <dvt:gaugeBackground> <dvt:specialEffects <dvt:gradientStopStyle/> </dvt:specialEffects> </dvt:gaugeBackground> <dvt:gaugeFrame/> <dvt:indicator/> <dvt:indicatorBase/> <dvt:gaugePlotArea/> <dvt:tickLabel/> <dvt:tickMark/> <dvt:topLabel/> <dvt:bottomLabel/> <dvt:metricLabel <dvt:thresholdSet> <dvt:threshold <dvt:threshold </dvt:thresholdSet> </dvt:gauge> Gauges are displayed in a default size of 200 X 200 pixels. You can customize the size of a gauge or specify dynamic resizing to fit an area across different browser window sizes. When gauges are displayed in a horizontally or vertically restricted area, for example in a web page sidebar, the gauge is displayed in a small image size. Although fully featured, the smaller image is a simplified display. The following types of gauges are supported by the gauge component: Dial: Indicates its metric along a 220 degree arc. This is the default gauge type. Figure 25-2 shows a dial gauge indicating a Plasma HD TV stock level within an acceptable range. Status Meter: Indicates the progress of a task or the level of some measurement along a rectangular bar. An inner rectangle shows the current level of a measurement against the ranges marked on an outer rectangle. Figure 25-3 shows the Plasma HD TV stock level using a status meter gauge. Status Meter (vertical): Indicates the progress of a task or the level of some measurement along a vertical rectangular bar. Figure 25-4 shows the Plasma HD TV stock level using a vertical status meter gauge. LED (light-emitting diode): Graphically depicts a measurement, such as a key performance indicator (KPI). Several styles of graphics are available for LED gauges, such as arrows that indicate good (up arrow), fair (left- or right-pointing arrow), or poor (down arrow). Figure 25-5 shows the Plasma HD TV stock level using a LED bulb indicator. Figure 25-6 shows the same stock level using a LED arrow. 
For dial and status meter gauges, a tooltip of contextual information automatically displays when a users moves a mouse over the plot area, indicator, or threshold region. Figure 25-7 shows the indicator tooltip for a dial gauge. Gauge terms identify the many aspects of a gauge and gauge set that you can customize. The gauge component includes approximately 20 child tags that provide options for this customization. The parts of a gauge that can be customized are: Overall gauge customization: Each item in this group is represented by a gauge child tag: Gauge Background: Controls border color and fill color for the background of a gauge. Gauge Set Background: Controls border color and fill color for the background of a gauge set. Gauge Frame: Refers to the frame behind the dial gauge. Plot Area: Represents the area inside the gauge itself. Indicator: Points to the value that is plotted in a dial gauge. It is typically in the form of a line or an arrow. Indicator Bar: The inner rectangle in a status meter gauge. Indicator Base: The circular base of a line or needle style indicator in a dial gauge. Threshold Set: Specifies the threshold sections for the metrics of a gauge. You can create an infinite number of thresholds for a gauge. Data values: These include the metric (which is the actual value that the gauge is plotting), minimum value, maximum value, and threshold values. Section 25.2, "Understanding Data Requirements for Gauges" describes these values. Labels: The gauge supports the following elements with a separate child tag for each item: Bottom Label: Refers to an optional label that appears below or inside the gauge. By default displays the label for the data row. Lower Label Frame: Controls the colors for the background and border of the frame that contains the bottom label. The metric label can also appear inside the lower label frame, to the right of the bottom label. Metric Label: Shows the value of the metric that the gauge is plotting in text. Tick Marks: Refers to the markings along the value axis of the gauge. These can identify regular intervals, from minimum value to maximum value, and can also indicate threshold values. Tick marks can specify major increments that may include labels or minor increments. Tick Labels: Displays text that is displayed to identify major tick marks on a gauge. Top Label: Refers to the label that appears at the top or inside of a gauge. By default, a title separator is used with a label above the gauge. By default, displays the label for the data column. Upper Label Frame: Refers to the background and border of the frame that encloses the top label. You can specify border color and fill color for this frame. Turn off the default title separator when using this frame. Legend: The gauge supports the gauge legend area, text, and title elements with a separate child tag for each item. Shape Attributes Set: The gauge supports interactivity properties for its child elements. For example, the alt text of a gauge plot area can be displayed as a tooltip when the user moves the mouse over that area at runtime. For more information, see Section 25.5.3, "How to Add Interactivity to Gauges." You can provide the following kinds of data values for a gauge: Metric: The value that the gauge is to plot. This value can be specified as static data in the Gauge Data attributes category in the Property Inspector. It can also be specified through data controls or through the tabularData attribute of the dvt:gauge tag. This is the only required data for a gauge. 
The number of metric values supplied affects whether a single gauge is displayed or a series of gauges are displayed in a gauge set. Minimum and maximum: Optional values that identify the lowest and highest points on the gauge value axis. These values can be provided as dynamic data from a data collection. They can also be specified as static data in the Gauge Data attributes category in the Property Inspector for the dvt:gauge tag. For more information, see Section 25.4.4, "How to Add Thresholds to Gauges." Threshold: Optional values that can be provided as dynamic data from a data collection to identify ranges of acceptability on the value axis of the gauge. You can also specify these values as static data using gauge threshold tags in the Property Inspector. For more information, see Section 25.4.4, "How to Add Thresholds to Gauges." The only required data element is the metric value. All other data values are optional. You can use any of the following ways to supply data to a gauge component: ADF Data Controls: You declaratively create a databound gauge by dragging and dropping a data collection from the ADF Data Controls panel. Figure 25-8 shows the Create Gauge dialog where you configure the metric value, and optionally the minimum and maximum and threshold values for a gauge you are creating. For more information, see the “Creating Databound ADF Gauges" section in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework. Tabular data: You can provide CSV (comma-separated value) data to a gauge through the tabularData attribute of the dvt:gauge tag. The process of creating a gauge from tabular data includes the following steps: Storing the data in a method in the gauge's managed bean. Creating a gauge that uses the data stored in the managed bean. The tabularData attribute of a gauge component lets you specify a list of metric values that the gauge uses to create a grid and to populate itself. You can provide only the metric value through the tabularData attribute. Therefore, you must specify any desired thresholds and minimum or maximum values through the Property Inspector. A gauge component displays rows and columns of gauges. The text that you specify as column labels appears in the top label of the gauges. The text that you specify as row labels appears in the bottom label of the gauges. The list that contains the tabular data consists of a three-member Object array for each data value to be passed to the gauge. The members of each array must be organized as follows: The first member (index 0) is the column label, in the grid, of the data value. This is generally a String. The second member (index 1) is the row label, in the grid, of the data value. This is generally a String. The third member (index 2) is the data value, which is usually Double. Figure 25-9 has five columns: Quota, Sales, Margin, Costs, and Units. The example has three rows: London, Paris, and New York. This data produces a gauge set with five gauges in each row and lets you compare values such as sales across the three cities. Example 25-2 shows code that creates the list of tabular data required for the gauge that compares annual results for three cities. 
Example 25-2 Code to Create a List of Tabular Data for a Gauge public List getGaugeData() { ArrayList list = new ArrayList(); String[] rowLabels = new String[] {"London", "Paris", "New York"}; String[] colLabels = new String[] {"Quota", "Sales", "Margin", "Costs", "Units"}; double [] [] values = new double[][]{ {60, 90, 135}, {50, -100, -150}, {130, 140, 150}, {70, 80, -130}, {110, 120, 130} }; for (int c = 0; c < colLabels.length; c++) { for (int r = 0; r < rowLabels.length; r++) { list.add (new Object [] {colLabels[c], rowLabels[r], new Double (values [c][r])}); } } return list; } Use the tabularData attribute of the gauge tag to reference the tabular data that is stored in a managed bean. To create a gauge that uses tabular data from a managed bean: In the ADF Data Visualizations page of the Component Palette, Gauge panel, drag and drop a Gauge onto the page. In the Component Gallery, select the category, type, and quick-start layout style for the gauge that you are creating. In the Gauge Data category of the Property Inspector, from the tabularData attribute dropdown menu, choose Expression Builder. In the Expression Builder dialog, use the search box to locate the managed bean. Expand the managed bean node and select the method that creates the list of tabular data. Click OK. The Expression is created. For example, if the name of the managed bean is sampleGauge and the name of the method that creates the list of tabular data is getGaugeData, the Expression Builder generates the code #{sampleGauge.gaugeData} as the value for the tabularData attribute of the dvt:gauge tag. When you create a gauge tag that is powered by data obtained from a list referenced in the tabularData attribute, the following results occur: A gauge is generated with a setting in its tabularData attribute. The settings for all other attributes for this gauge are provided by default. You have the option of changing the setting of the gaugeType attribute in the Property Inspector to DIAL, LED, STATUSMETER, or VERTICALSTATUSMETER. Gauge components can be customized in the following ways: Change the gauge type Specify the layout of gauges in a gauge set Change a gauge size and style Add thresholds Format numbers and text Specify an N-degree dial gauge Customize gauge labels Customize indicators and tick marks Specifying transparency in gauges You can change the type of a gauge using the gaugeType attribute of the dvt:gauge tag. The gauge type is reflected in the visual editor default gauge. To change the type of a gauge: In the Structure window, right-click the dvt:gauge node and choose Go to Properties. In the Property Inspector, choose a gauge type from the GaugeType attribute dropdown list. Valid values are DIAL, LED, STATUSMETER, or VERTICALSTATUSMETER. A single gauge can display one row of data bound to a gauge component. A gauge set displays a gauge for each row in multiple rows of data in a data collection. You can specify the location of gauges within a gauge set by specifying values for attributes in the dvt:gauge tag. To specify the layout of gauges in a gauge set: In the Structure window, right-click the dvt:gauge node and choose Go to Properties. In the Property Inspector, select the Common attributes category. To determine the number of columns of gauges that will appear in a gauge set, specify a value for the gaugeSetColumnCount attribute. A setting of zero causes all gauges to appear in a single row. Any positive integer determines the exact number of columns in which the gauges are displayed. 
A setting of -1 causes the number of columns to be determined automatically from the data source. To determine the placement of gauges in columns, specify a value for the gaugeSetDirection attribute. If you select GSD_ACROSS, then the default layout of the gauges is used and the gauges appear from left to right, then top to bottom. If you select GSD_DOWN, the layout of the gauges is from top to bottom, then left to right. To control the alignment of gauges within a gauge set, specify a value for the gaugeSetAlignment attribute. This attribute defaults to the setting GSA_NONE, which divides the available space equally among the gauges in the gauge set. Other options use the available space and optimal gauge size to allow for alignment towards the left or right and the top or bottom within the gauge set. You can also select GSA_CENTER to center the gauges within the gauge set. You can customize the width and height of a gauge, and you can allow for dynamic resizing of a gauge based on changes to the size of its container. You can also control the style sheet used by a gauge. These two aspects of a gauge are interrelated in that they share the use of the gauge inlineStyle attribute. You can specify the initial size of a gauge by setting values for attributes of the dvt:gauge tag. If you do not also provide for dynamic resizing of the gauge, then the initial size becomes the only display size for the gauge. To specify the size of a gauge at its initial display: In the Structure window, right-click the dvt:gauge node and choose Go to Properties. In the Style attributes category of the Property Inspector, enter a value for the InlineStyle attribute of the dvt:gauge tag. For example: inlineStyle="width:200px;height:200px" You must enter values in each of two attributes of the dvt:gauge tag to allow for a gauge to resize when its container in a JSF page changes in size. The values that you specify for this capability also are useful for creating a gauge component that fills an area across different browser window sizes. To allow dynamic resizing of a gauge: In the Structure window, right-click the dvt:gauge node and choose Go to Properties. In the Behavior attributes category of the Property Inspector for the DynamicResize attribute, select the value DYNAMIC_SIZE. In the Style attributes category of the Property Inspector,;" You have the option of specifying a custom style class for use with a gauge. However, you must specify width and height in the inlineStyle attribute. To specify a custom style class for a gauge: In the Structure window, right-click the dvt:gauge node and choose Go to Properties. In the Style attributes category of the Property Inspector, for the StyleClass attribute, select Edit from the Property menu choices, and select the CSS style class to use for this gauge.;". Thresholds are data values in a gauge that highlight a particular range of values. Thresholds must be values between the minimum and the maximum value for a gauge. The range identified by a threshold is filled with a color that is different from the color of other ranges. The data collection for a gauge can provide dynamic values for thresholds when the gauge is databound. After the gauge is created, you can also insert a dvt:thresholdSet tag and individual dvt:threshold tags to create static thresholds. If threshold values are supplied in both the data collection and in threshold tags, then the gauge honors the values in the threshold tags. You can create an indefinite number of thresholds in a gauge. 
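Before the step-by-step procedure that follows, here is a rough sketch of the markup that static thresholds produce, patterned on Example 25-1; the attribute names and values shown (thresholdMaxValue, fillColor, text) are illustrative assumptions rather than text taken from this section:

<dvt:gauge ...>
  <dvt:thresholdSet>
    <dvt:threshold
    <dvt:threshold
    <dvt:threshold
  </dvt:thresholdSet>
</dvt:gauge>

Note that, as described in the procedure below, any maximum value entered for the final threshold is ignored in favor of the gauge maximum.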
Each threshold is represented by a single dvt:threshold tag. One dvt:thresholdSet tag must wrap all the threshold tags. To add static thresholds to a gauge: In the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > Threshold Set. You do not need to specify values for attributes on the dvt:thresholdSet tag. Right-click the dvt:thresholdSet node and choose Insert inside dvt:thresholdSet > threshold. In the Property Inspector, enter values for the attributes that you want to customize for this threshold. You have the option of entering a specific fill color and border color for the section of the gauge related to the threshold. You can also identify the maximum value for the threshold and any text that you want to display in the legend to identify the threshold. Note:For the final threshold, the maximum value of the gauge is used as the threshold maximum value regardless of any entry you make in the threshold tag for the final threshold. Repeat Step 2 and Step 3 to create each threshold in the gauge from the lowest minimum value to the highest maximum value. You have the option of adding any number of thresholds to gauges. However, arrow and triangle LED gauges support thresholds only for the three directions to which they choose Insert inside dvt:gauge > ADF Data Visualization > metricLabel. If you want to display the metric value as a percentage rather than as a value, then set the NumberType attribute of the dvt:metricLabel tag to NT_PERCENT. If you want to specify additional formatting for the number in the metric label, do the following: Right-click the metricLabel node and choose Insert inside dvt:metricLabel >. When you add a metric label and number formatting to a gauge, XML code is generated. Example 25-3 shows a sample of the XML code that is generated. You can format text in any of the following gauge tags that represent titles and labels in a gauge: dvt:bottomLabel dvt:gaugeMetricLabel dvt:gaugeLegendText dvt:gaugeLegendTitle dvt:tickLabel dvt:topLabel The procedure for formatting text in gauge labels and titles is similar except that you insert the appropriate child tag that represents the gauge label or title. For example, you can use a dvt:gaugeFont child tag to a dvt:metricLabel tag to specify gauge metric label font size, color, and if the text should be bold or italic. To format text in a gauge metric label: In the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > metricLabel. Right-click the metricLabel node and choose Insert inside dvt:metricLabel > Font. In the Property Inspector, specify values in the attributes of the dvt:gaugeFont tag to produce the desired formatting. When you format text in a gauge metric label using the gaugeFont tag, XML code is generated. Example 25-4 shows a sample of the XML code that is generated. You can specify a gauge that sweeps through angles other than the standard 220-degree arc in a dial gauge. Set the angleExtent attribute to specify the range of degrees in the gauge. For example, to create a 270 degree dial gauge, set the angleExtent attribute as follows: <dvt:gauge. You can control the positioning of gauge labels. You can also control the colors and borders of the gauge label frames. You can specify whether you want labels to appear outside or inside a gauge by using the position attribute of the appropriate label tag. 
The following label tags are available as child tags of dvt:gauge: dvt:bottomLabel dvt:metricLabel dvt:topLabel The procedure for controlling the position of gauge labels is similar except that you insert the appropriate child tag that represents the gauge label. For example, you can use the dvt:bottomLabel child tag to position the gauge and specify label text. To specify the position of the bottom label: In the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > Bottom Label. In the Property Inspector, for the position attribute, select the desired location of the label. In the text attribute, enter the text that you want the label to display. You can control the fill color and border color of the frames for the top label and the bottom label. The dvt:upperLabelFrame and dvt:lowerLabelFrame gauge child tags serve as frames for these labels. To customize the color and border of the upper label frame: In the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > Upper Label Frame. In the Property Inspector, select the desired colors for the borderColor attribute and the fillColor attribute. Use a similar procedure to customize the color and border of the bottom label frame using the dvt:bottomLabel tag as a child of the gauge node. There are a variety of options available for customizing the indicators of gauges and the location and labeling of tick marks. The following gauge child tags are available to customize the indicator of a gauge: dvt:indicator: Specifies the visual properties of the dial gauge indicator needle or the status meter bar. Includes the following attributes: borderColor: Specifies the color of the border of the indicator. fillColor: Specifies the color of the fill for the indicator. type: Identifies the kind of indicator: a line indicator, a fill indicator, or a needle indicator. useThresholdFillColor: Determines whether the color of the threshold area in which the indicator falls should override the specified color of the indicator. dvt:indicatorBar: Contains the fill properties of the inner rectangle (bar) of a status meter gauge. dvt:indicatorBase: Contains the fill properties of the circular base of a line and needle style indicator of a dial gauge. To customize the appearance of gauge indicators: In the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > Indicator. In the Property Inspector, specify values for the desired attributes. If you want to customize the fill attributes of the inner bar on a status meter gauge, in the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > Indicator Bar. In the Property Inspector, specify values for the desired attributes. If you want to customize the circular base of a line style indicator on a dial gauge, in the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > Indicator Base. In the Property Inspector, specify values for the desired attributes. The following gauge child tags are available to customize tick marks and tick labels for a gauge: dvt:tickMark: Specifies the display, spacing, and color of major and minor tick marks. Only major tick marks can include value labels. Includes the following attributes: majorIncrement and minorIncrement: Sets the distance between two major tick marks and two minor tick marks, respectively. 
If the value is less than zero for either attribute, the tick marks are not displayed. majorTickColor and minorTickColor: Sets the hexidecimal color of major tick marks and minor tick marks, respectively. content: Specifies where tick marks occur within a gauge set. Valid values are any combination separated by spaces or commas including: TC_INCREMENTS: Display tick marks in increments. TC_MAJOR_TICK: Display tick marks for minimum, maximum, and incremental values. TC_MIN_MAX: Display tick marks for minimum and maximum values. TC_METRIC: Display tick marks for actual metric values. TC_NONE: Display no tick marks. TC_THRESHOLD: Display tick marks for threshold values. dvt:tickLabel: Identifies major tick marks that will have labels, the location of the labels (interior or exterior of the gauge), and the format for numbers displayed in the tick labels. To customize the tick marks and tick labels of a gauge: In the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > Tick Mark. In the Property Inspector, specify values for the desired attributes. In the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > Tick Label. In the Property Inspector, specify values for the desired attributes. By default, the dial gauge displays interior tick labels to provide a cleaner look when the gauge is contained entirely within the gauge frame. Because the tick labels lie within the plot area, the length of the tick labels must be limited to fit in this space. You can customize your gauge to use exterior labels. To create interior tick labels on a gauge: In the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > Tick Mark. In the Property Inspector, select TLP_POSITION for the Position attribute. You can specify that various parts of a gauge show transparent colors by setting the borderColor and fillColor attributes on the gauge child tags related to these parts of the gauge. These color properties accept a 6 or 8 RGB hexidecimal value. When an 8-digit value is used, the first two digits represent transparency. For example, you can set transparency by using a value of 00FFFFFF. Any gauge child tag that supports borderColor or fillColor attributes can be set to transparency. The following list are examples of parts of the gauge that support transparency: Gauge background: Use the dvt:gaugeBackground tag. Gauge gauge frame: Use the dvt:gaugeFrame tag. Gauge plot area: Use the dvt:gaugePlotArea tag. Gauge legend area: Use the dvt:gaugeLegendArea tag. These gauge features are used less frequently than the common gauge features. These special features include applying gradient effects to parts of a gauge, adding interactivity to gauges, animating gauges, and taking advantage of the gauge support for active data. A gradient is a special effect in which an object changes color gradually. Each color in a gradient is represented by a stop. The first stop is stop 0, the second is stop 1, and so on. You must specify the number of stops in the special effects for a subcomponent of a gauge that supports special effects. You can define gradient special effects for the following subcomponents of a gauge: Gauge background: Uses the dvt:gaugeBackground tag. Gauge set background: Uses the dvt:gaugeSetBackground tag. Gauge plot area: Uses the dvt:gaugePlotArea tag. Gauge frame: Uses the dvt:gaugeFrame tag. Gauge legend area: Uses the dvt:gaugeLegendArea tag. 
Lower label frame: Uses the dvt:lowerLabelFrame tag. Upper label frame: Uses the dvt:upperLabelFrame tag. Indicator: Uses the dvt:indicator tag. Indicator bar: Uses the dvt:indicatorBar tag. Indicator base: Uses the dvt:indicatorBase tag. Threshold: Uses the dvt:threshold tag. The approach that you use to define gradient special effects is identical for each part of the gauge that supports these effects. For each subcomponent of a gauge to which you want to add special effects, you must insert a dvt:specialEffects tag as a child tag of the subcomponent. For example, if you want to add a gradient to the background of a gauge, then you would create one dvt:specialEffects tag that is a child of the dvt:background tag. You must also set the dvt:specialEffects tag fillType property to FT_GRADIENT. Then, optionally if you want to control the rate of change for the fill color of the subcomponent, you would add as many dvt:gradientStopStyle tags as you need to control the color and rate of change for the fill color of the component. These dvt:gradientStopStyle tags then must be entered as child tags of the single dvt:specialEffects tag. If you have not inserted a dvt:gaugeBackground tag as a child of the gauge, in the Structure window, right-click the gauge node and choose Insert inside dvt:gauge > ADF Data Visualization > Gauge Background. To add a gradient special effect to the background of a gauge: In the Structure window, right-click the gauge background node and choose Insert inside dvt:gaugeBackground > Special Effects. Use the Property Inspector to enter values for the attributes of the dvt:specialEffects tag: For fillType attribute, select FT_GRADIENT. For gradientDirection attribute, select the direction of change that you want to use for the gradient fill. For the numStops attribute, enter the number of stops to use for the gradient. Optionally, in the Structure window, right-click the special effects node and choose Insert within dvt:specialEffects > dvt:gradientStopStyle if you want to control the color and rate of change for each gradient stop. Use the Property Inspector to enter values for the attributes of the dvt:gradientStopStyle tag: For the stopIndex attribute, enter a zero-based integer as an index within the dvt:gradientStopStyle tags that are included within the dvt:specialEffects tag. For the gradientStopColor attribute, enter the color that you want to use at this specific point along the gradient. For the gradientStopPosition attribute, enter the proportional distance along a gradient for the identified stop color. The gradient is scaled from 0 to 100. If 0 or 100 is not specified, default positions are used for those points. Repeat Step 3 and Step 4 for each gradient stop that you want to specify. When you add a gradient fill to the background of a gauge and specify two stops, XML code is generated. Example 25-5 shows the XML code that is generated. Example 25-5 XML Code Generated for Adding a Gradient to the Background of a Gauge <dvt:gauge > <dvt:gaugeBackground <dvt:specialEffects <dvt:gradientStopStyle <dvt:gradientStopStyle </dvt:specialEffects> </dvt:gaugeBackground> </dvt:gauge> You can specify interactivity properties on subcomponents of a gauge using one or more dvt:shapeAttributes tags wrapped in a dvt:shapeAttributesSet tag. The interactivity provides a connection between a specific subcomponent and an HTML attribute or a JavaScript event. Each dvt:shapeAttributes tag must contain a subcomponent and at least one attribute in order to be functional. 
For example, Example 25-6 shows the code for a dial gauge where the tooltip of the indicator changes from "Indicator" to "Indicator is Clicked" when the user clicks the indicator, and the tooltip for the gauge metric label displays "Metric Label" when the user mouses over that label at runtime. Example 25-6 Sample Code for Gauge shapeAttributes Tag <dvt:gauge > <dvt:shapeAttributesSet> <dvt:shapeAttributes <dvt:shapeAttributes </dvt:shapeAttributesSet> </dvt:gauge> You can also use a backing bean method to return the value of the attribute. Example 25-7 shows sample code for referencing a backing bean and Example 25-8 shows the backing bean sample code. Example 25-7 Gauge shapeAttributes Tag Referencing a Backing Bean <dvt:gauge > <dvt:shapeAttributesSet> <dvt:shapeAttributes <dvt:shapeAttributes </dvt:shapeAttributesSet> </dvt:gauge> Example 25-8 Sample Backing Bean Code public String alt(oracle.dss.dataView.ComponentHandle handle) { return handle.getName(); } public String onClick(oracle.dss.dataView.ComponentHandle handle) { return ("document.title=\"onClick\";"); } public String onMouseMove(oracle.dss.dataView.ComponentHandle handle) { return ("document.title=\"onMouseMove\";"); } The following gauge subcomponents support the dvt:shapeAttributes tag: GAUGE_BOTTOMLABEL - the label below the gauge GAUGE_INDICATOR - the indicator in the gauge GAUGE_LEGENDAREA - the legend area of the gauge GAUGE_LEGENDTEXT - the text label of the legend area GAUGE_METRICLABEL - the label showing the metric value GAUGE_TOPLABEL - the label above the gauge GAUGE_PLOTAREA - the area inside the gauge GAUGE_THRESHOLD - the threshold area of the gauge You can animate gauges (not gauge sets) to show changes in data, for example, a dial gauge indicator can change color when a data value increases or decreases. Figure 25-10 shows a dial gauge with the dial indicator animated to display the data change at each threshold level. The attributes for setting animation effects on gauges are: animationOnDisplay: Use to specify the type of initial rendering effect to apply. Valid values are: NONE (default): Do not show any initial rendering effect. AUTO: Apply an initial rendering effect automatically chosen based on graph or gauge type. animationOnDataChange: Use to specify the type of data change animation to apply. Valid values are: NONE: Apply no data change animation effects. AUTO (default): Apply Active Data Service (ADS) data change animation events. For more information about ADS, see Section 25.5.5, "How to Animate Gauges with Active Data." ON: Apply partial page refresh (PPR) data change animation events. Use this setting to configure the application to poll the data source for changes at prescribed intervals. Animation effects using Active Data Service (ADS) can be added to dial and status meter gauge types. ADS allows you to bind ADF Faces components to an active data source using the ADF model layer. To allow this, you must configure the components and the bindings so that the components can display the data as it is updated in the data source. In order to use the Active Data Service, you must: Have a data source that publishes events when data is changed Create business services that react to those events and the associated data controls to represent those services For more information about ADS and configuring your application, see the "Using the Active Data Service" chapter in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework. 
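Tying the animation attributes above to markup, a minimal sketch could look like the following (only the two animation attributes are shown; all other gauge attributes and child tags are omitted):

<dvt:gauge
     animationOnDisplay="AUTO"
     animationOnDataChange="AUTO">
  <!-- gauge child tags as usual -->
</dvt:gauge>

With animationOnDataChange="AUTO", data change events delivered through ADS, configured as described next, drive the animation.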
You configure a databound gauge to display active data by setting a value on the binding element in the corresponding page definition file. To configure a gauge to display active data: In the Structure window, right-click the gauge node, and choose Go To Properties. In the Property Inspector, enter a unique value in the ID field. If you do not select an identifier, one will be entered for you. Open the page's associated page definition file. In the Structure window for the page definition file, select the node that represents the attribute binding for the component. In the Property Inspector, select Push for the ChangeEventPolicy attribute. After configuring the gauge component to display active data, set animation effects using the attributes defined in Section 25.5.4, "How to Animate Gauges." You can directly specify the graphics for a gauge to create custom gauge shapes. The customShapesPath attribute is set to point to the vector graphics file that is processed into graphics used for output. JDeveloper also provides a set of custom shape styles accessible by using the customShapesPath attribute. Due to the requirements for rotating and resizing a gauge's components, such as the plot area or tick marks, a vector graphics file is required when creating a custom shapes graphic file. Scalable Vector Graphics (SVG) is the supported file format for creating custom shapes for gauges. After designing the gauge and exporting it to an SVG file, a designer can add information to identify, scale, and position the gauge shapes and components, and to specify other metadata used in processing. In the SVG file, gauge components are identified using an ID. For example, an SVG file with <polygon id="indicator"/> would be interpreted as using a polygon shape for the indicator component. To specify multiple shapes to create the desired visual for a component, the ID can be modified as in id="indicator_0", id="indicator_1", and id="indicator_2". Table 25-1 shows the gauge component IDs and their descriptions. Table 25-2 shows the metadata IDs and the descriptions used for internal calculations, not rendered in the gauge. Example 25-9 shows a sample SVG file used to specify custom shapes for the components of a gauge. Example 25-9 Sample SVG File Used for Gauge Custom Shapes <?xml version="1.0" encoding="UTF-8" standalone="no"?> <svg xmlns: <rect width="264.72726" height="179.18887" rx="8.2879562" ry="10.368411" x="152.76225" y="202.13995" style="fill:#c83737;fill-opacity:1;stroke:none" id="gaugeFrame"/> <rect width="263.09058" height="42.581127" rx="3.0565372" ry="3.414634" x="155.11697" y="392.35468" fill="#c83737" id="lowerLabelFrame" /> <rect width="241.79999" height="120.13961" x="164.2415" y="215.94714" style="fill:#ffeeaa" id="plotAreaBounds"/> <rect width="74.516975" height="44.101883" rx="2.6630435" ry="3.5365853" x="247.883" y="325.4415" style="fill:#ffd5d5;fill-opacity:1;stroke:none" id="indicatorBase"/> <rect width="6.0830183" height="98.849045" rx="2.6630435" ry="2.2987804" x="282.86035" y="237.23772" style="fill:#00aa00;fill-opacity:1;stroke:none" id="indicator"/> </svg> After creating the SVG file to be used to specify the custom shapes for your gauge, set the customShapesPath attribute to point to the file. To specify a custom shapes file: In the Structure window, right-click the dvt:gauge node and choose Go to Properties. In the Appearance attributes category of the Property Inspector, for the CustomShapesPath attribute, enter the path to the custom shapes file. 
For example, customShapesPath="/path/customShapesFile.svg". The custom shapes available to you support the following SVG features: Transformations Paths Basic shapes Fill and stroke painting Linear and radial gradients SVG features that are not supported by custom shapes in JDeveloper include: Unit Identifiers: All coordinates and lengths should be specified without the unit identifiers, and are assumed to be in pixels. The parser does not support unit identifiers, because the size of certain units can vary based on the display used. For example, an inch may correspond to different numbers of pixels on different displays. The only exceptions to this are gradient coordinates, which can be specified as percentages. Text: All text on the gauge is considered data, and should be specified through the tags or data binding. Specifying Paint: The supported options are none, 6-digit hexadecimal, and a <uri> reference to a gradient. Fill Properties: The fill-rule attribute is not supported. Stroke Properties: The stroke-linecap, stroke-linejoin, stroke-miterlimit, stroke-disarray, and stroke-opacity attributes are not supported. Linear Gradients and Radial Gradients: The gradientUnits, gradientTransform, spreadMethod, and xlink:href are not supported. Additionally, the r, fx, and fy attributes on the radial gradient are not supported. Elliptical Arc Out-of-Range Parameters: If rx, ry, and x-axis-rot are too small such that there is no solution, the ellipse should be scaled uniformly until there is exactly one solution. The SVG parser will not support this. General Error Conditions: The SVG input is expected to be well formed and without errors. The SVG parser will not perform any error checking or error recovery for incorrectly formed files, and it will stop parsing when it encounters an error in the file. In addition to the ability to specify custom shapes for gauges, there are a set of prebuilt custom shapes styles for use with the gauge components. The available styles are: Rounded rectangle Full circle Beveled circle Figure 25-11 shows a dial gauge displayed with each of the custom shapes styles applied. To apply a custom shapes style to a gauge: In the Structure window, right-click the dvt:gauge node and choose Go to Properties. In the Appearance attributes category of the Property Inspector, select the custom shapes style from the CustomShapesPath attribute dropdown list.
http://docs.oracle.com/cd/E15523_01/web.1111/b31973/dv_gauge.htm
CC-MAIN-2015-06
refinedweb
6,759
53.61
Cinder/VMwareVmdkDriver/vmdk-storage-policy-volume-type Contents Blueprint Status Blueprint is implemented and merged for icehouse-3. This includes the second phase of "auto discover of PBM wsdl files". So vmware_pbm_wsdl config is deprecated and not used. Requirements - Cinder VMDK driver when configured with a VC should allow specifying storage policies for a volume. When configured with ESX, the VMDK driver will ignore storage policy. - A new vmdk volume placement as well as moving existing vmdk should honour the storage profile - Note: There would be no way for the OpenStack user to verify compliance of the vmdk with its associated storage policy. Though the vmdk could go out of compliance sometime after placement due to various factors in the environment compliance reporting is not exposed to the OpenStack users/admins. - If there are no datastores that match the given storage profile then the placement should fail and hence the operation (volume attach) could fail. - Note that during a volume attach the VMDK driver can only try to move the vmdk file to a datastore that is visible from the VM instance. This can fail if none of the datastores visible to this VM instance satisfy/match the given storage profile. There could be other datastores in VC that potentially match the given storage profile but are not visible to the VM instance. There is no attempt made to migrate the VM instance in this case so that the volume attachment could succeed. - Cloning from a volume with an associated storage profile should carry forward the storage profile to the new target volume as well. - Cloning from a volume snapshot that has an associated storage profile should carry forward the storage profile to the new target volume as well. Admin workflow to setup the volume-type The storage admin first creates storage profiles in VC based on the storage vendor provided capabilities and/or tag based capabilities of the underlying storage infrastructure. Refer to to learn more about storage profiles on VC. Let us assume that the storage admin has partitioned the entire available storage into three categories of storage profiles named gold, silver and bronze. The admin now creates three corresponding volumes types on OpenStack referencing the three storage profiles. This is done as follows. $ cinder type-create gold $ cinder type-list +--------------------------------------+------+ | ID | Name | +--------------------------------------+------+ | 02e1ed45-4189-4649-b52c-2ec45e2804c1 | gold | +--------------------------------------+------+ $ cinder type-key 02e1ed45-4189-4649-b52c-2ec45e2804c1 set vmware:storage_profile=gold $ cinder extra-specs-list +--------------------------------------+------+-------------------------------------+ | ID | Name | extra_specs | +--------------------------------------+------+-------------------------------------+ | 02e1ed45-4189-4649-b52c-2ec45e2804c1 | gold | {u'vmware:storage_profile': u'gold'} | +--------------------------------------+------+-------------------------------------+ Similarly the admin sets up volume types for "silver" and "bronze" as well. Now a user has 3 volume types to choose from when creating a new cinder volume. User workflow to create and attach a volume associated with a storage profile A user can list all the available volume types and choose one of them when creating a new volume. 
# list volume types and create a volume based on a volume type $ cinder type-list +--------------------------------------+--------+ | ID | Name | +--------------------------------------+--------+ | 02e1ed45-4189-4649-b52c-2ec45e2804c1 | gold | | 3d5e187d-7f3b-4a88-8e4a-c55f6a9af54d | silver | | 7fcd4153-faf5-4f7d-a688-49b2c9119de4 | bronze | +--------------------------------------+--------+ $ cinder create --name vol1 --volume-type gold 1 +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | created_at | 2013-12-02T04:48:54.000000 | | description | None | | id | 50855c59-ef81-47dc-9983-1edf95ab9804 | | metadata | {} | | name | vol1 | | size | 1 | | snapshot_id | None | | source_volid | None | | status | creating | | user_id | 2c5480d03a6849a29db9ec57e8927ac7 | | volume_type | gold | +-------------------+--------------------------------------+ $ cinder list +--------------------------------------+-----------+------+------+-------------+----------+-------------+ | ID | Status | Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+------+------+-------------+----------+-------------+ | 50855c59-ef81-47dc-9983-1edf95ab9804 | available | vol1 | 1 | gold | false | | +--------------------------------------+-----------+------+------+-------------+----------+-------------+ # attach the volume to a running nova VM $ nova list +--------------------------------------+----------+--------+------------+-------------+------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+----------+--------+------------+-------------+------------------+ | 6dd8e9b0-4939-4295-8ef3-2a92ca1b4f55 | debianVM | ACTIVE | None | Running | private=10.0.0.2 | +--------------------------------------+----------+--------+------------+-------------+------------------+ $ nova volume-attach debianVM 50855c59-ef81-47dc-9983-1edf95ab9804 /dev/sdb +----------+--------------------------------------+ | Property | Value | +----------+--------------------------------------+ | device | /dev/sdb | | id | 50855c59-ef81-47dc-9983-1edf95ab9804 | | serverId | 6dd8e9b0-4939-4295-8ef3-2a92ca1b4f55 | | volumeId | 50855c59-ef81-47dc-9983-1edf95ab9804 | +----------+--------------------------------------+ The VMDK driver will create the volume's vmdk file on a datastore that matches the given profile. Implementation - Cinder's existing volume type definition will be used to configure and hold the names of storage profiles under the namespace "vmware:storage_profile". - A new PBM client (similar to the existing VIM client) will be needed to make API calls to the PBM service. The existing VMwareAPISession will expand to accommodate api calls to PBM service through the pbm client object. - In the first phase, the PBM WSDL file needs to be manually downloaded and configured in cinder.conf. The second phase of implementation will package the PBM WSDL files necessary. See the #Configuration section below for details. - Datastore filtering when placing a new backing VM as well as when moving an existing backing VM will make use of PBM APIs to find out matching datastores. - Additional unit tests to cover new code. This will be written using Mock. Existing unit test changes will continue to be in mox. 
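As a rough illustration of the datastore-filtering step described in the implementation notes above, the sketch below shows the general shape such a helper could take. It is illustrative only: the session attribute and the two PBM helper calls are invented names, not the actual APIs added by this blueprint.

def filter_datastores_by_profile(session, datastores, profile_name):
    """Return only the datastores that satisfy the given storage profile."""
    profile_id = session.pbm.find_profile_by_name(profile_name)   # hypothetical helper
    if profile_id is None:
        return []   # no such profile: placement (and hence the operation) should fail
    matching = session.pbm.filter_hubs_by_profile(datastores, profile_id)   # hypothetical helper
    return [ds for ds in datastores if ds in matching]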
Configuration - Admin user should configure the name of the storage profile on a volume type under the key vmware:storage_profile. Look at Admin workflow section for the commands to do this. - PBM WSDL file configuration - If using code after first phase of delivery, the path to PBM WSDL file needs to be configured in the cinder-volume node within the /etc/cinder/cinder.conf file against vmware_pbm_wsdl config key. This will look like this if the vSphere Management SDK is downloaded and extracted to /opt on a Linux system. vmware_pbm_wsdl = "" On a Windows system this path could look like this if the vSphere Management SDK is downloaded and extracted within the Downloads folder. vmware_pbm_wsdl = "" - Auto discovery of PBM WSDL - The second phase of delivery will try to ease this manual step by packaging PBM WSDL files for vSphere version 5.5. So if configuring OpenStack Cinder with a vSphere version 5.5 or above then the admin has no manual steps involved. These PBM WSDL files are auto-discovered and used to invoke the PBM APIs. If Cinder is being configured with an older version of vSphere then PBM based placement is disabled. Default PBM Policy - When a volume is created either - without specifying a volume_type or - specifying a volume_type that does not have a "vmware:storage_profile" extra spec in it then the volume is created without any storage policy attached to it. The OpenStack admin has an option of configuring a default storage policy to be used in these cases. This is done by setting pbm_default_policy config key to a storage policy that is defined in vCenter. For example an entry in /etc/cinder/cinder.conf could look like this: pbm_default_policy=bronze
https://wiki.openstack.org/wiki/Cinder/VMwareVmdkDriver/vmdk-storage-policy-volume-type
CC-MAIN-2020-16
refinedweb
1,100
51.68
In this article I will discuss how to create a Reminder in Windows Phone. I will also dive into a few other aspects of the Reminder feature in Windows Phone. Let's create a simple reminder. I will use the timepicker control to set the reminder and activate it. Timepicker is available in the Silverlight Windows Phone Toolkit. Step 4: Add the timepicker control inside the grid of MainPage.xaml. It has timePicker_ValueChanged, which will trigger on change of time. Step 5: Add the Microsoft.Phone.Scheduler directive. using Microsoft.Phone.Scheduler; Step 6: In the code behind of MainPage.xaml add the timePicker_ValueChanged method. private void timePicker_ValueChanged(object sender, EventArgs args){ string reminderName = Guid.NewGuid().ToString(); Reminder reminder = new Reminder(reminderName); reminder.Title = "My Reminder"; DateTime date = DateTime.Today; DateTime beginTime = date + ((DateTime)timePicker.Value).TimeOfDay; reminder.BeginTime = beginTime; //reminder.ExpirationTime = date.AddHours(20); reminder.RecurrenceType = RecurrenceInterval.Daily; ScheduledActionService.Add(reminder);} We can use Guid to generate a random name for the reminder. BeginTime of the reminder can be set using DateTime.Today and the value selected in the timepicker. We can set RecurrenceType by using the RecurrenceInterval enum. The options of the enum are: None = 0 : No recurrence. The notification is launched once at the time specified by Microsoft.Phone.Scheduler.ScheduledAction.BeginTime Daily = 1: Daily recurrence Weekly = 2: Weekly recurrence Monthly = 3: Monthly recurrence EndOfMonth = 4: Recurring at the end of each month Yearly = 5: Yearly recurrence Note: If BeginTime is exceeded by more than 4 hours then the Reminder won't be launched. Step 7: Now wrap the above code in a loop that runs 50 times, like below. private void timePicker_ValueChanged(object sender, EventArgs args){ for (int i = 0; i < 50; i++) { // ... same reminder-creation code as in Step 6 ... reminder.RecurrenceType = RecurrenceInterval.Daily; ScheduledActionService.Add(reminder); }} Step 8: Uninstall this app from the emulator or device if you have already run it. Now run the program again with the 50-iteration loop and set the reminder. It will work fine. Now run the emulator again; you will notice that you get an unhandled InvalidOperationException as shown below. BNS Error: The maximum number of ScheduledActions of this type have already been added. Step 9: Now uninstall the app with the 50-iteration loop, increase the loop count to 51, and run the app again. You will get the same error again. So, the conclusion is that there is a limit of 50 active reminders per application in Windows Phone. Once you add a Reminder to ScheduledActionService, it remains in the application until you remove it. As shown above, reminders will stop working once you reach the threshold of 50. To make reminders work any number of times, the previous reminder(s) need to be removed from the ScheduledActionService before setting a new reminder. Step 10: Add the System.IO.IsolatedStorage directive. using System.IO.IsolatedStorage; Step 11: Store the reminder name in IsolatedStorage. Remove the reminder name from the ScheduledActionService before adding a new one. IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings; string reminderName = Guid.NewGuid().ToString(); settings["Reminder"] = reminderName; Now one can set a reminder any number of times. This ends the article on Reminders in Windows Phone.
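One piece the truncated Step 11 snippet above does not show is the removal call itself. Below is a minimal sketch of the remove-before-add logic the step describes; the "Reminder" settings key and the timePicker reference match the snippets above, but the method name and overall shape of this helper are my own assumptions, not the article's original code.

private void AddReminderReplacingOld()
{
    IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;
    string oldName;
    if (settings.TryGetValue<string>("Reminder", out oldName) && ScheduledActionService.Find(oldName) != null)
    {
        ScheduledActionService.Remove(oldName);   // free the slot used by the previous reminder
    }
    string reminderName = Guid.NewGuid().ToString();
    Reminder reminder = new Reminder(reminderName);
    reminder.Title = "My Reminder";
    reminder.BeginTime = DateTime.Today + ((DateTime)timePicker.Value).TimeOfDay;
    reminder.RecurrenceType = RecurrenceInterval.Daily;
    ScheduledActionService.Add(reminder);
    settings["Reminder"] = reminderName;          // remember the name for next time
    settings.Save();
}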
http://dotnetarticle.com/DisplayArticle.aspx?ID=198
CC-MAIN-2021-04
refinedweb
481
51.65
Introduction C# is a language with the features of C++, a programming style like Java, and the rapid application model of Basic. If you already know the C++ language, it will take you less than an hour to quickly go through the syntax of C#. Familiarity with Java will be a plus because the Java program structure, the concept of packages, and garbage collection will definitely help you learn C# more quickly. So while discussing C# language constructs, I will assume that you know C++. This article discusses the C# language constructs and features, using code examples in a brief and comprehensive way so that you can, just by having a glance at the code, understand the concepts. The following topics of the C# language are discussed: - Program Structure - Namespaces - Data Types - Variables - Operators and Expressions - Enumerations - Statements - Classes and Structs - Modifiers - Properties - Interfaces - Function Parameters - Arrays - Indexers - Boxing and Unboxing - Delegates - Inheritance and Polymorphism The following topics are not discussed: - Things that are common in C++ and C# - Concepts such as garbage collection, threading, file processing, and so forth - Data type conversions - Exception handling - .NET library

Program Structure Like C++, C# is case-sensitive. The semicolon, ;, is the statement separator. Unlike C++, there are no separate declaration (header) and implementation (.cpp) files in C#. All code (class declaration and implementation) is placed in one file with a .cs extension. Have a look at this Hello World program in C#. using System; namespace MyNameSpace { class HelloWorld { static void Main(string[] args) { Console.WriteLine ("Hello World"); } } } Everything in C# is packed into a class, and classes in C# are packed into namespaces (just like files in a folder). Like C++, a main method is the entry point of your program. C++'s main function is called "main", whereas C#'s main function starts with a capital M and is named "Main". There is no need to put a semicolon after a class block or struct definition. It was required in C++, but not in C#.

Namespace Every class is packaged into a namespace. Namespaces are exactly the same concept as in C++, but in C# we use namespaces more frequently than in C++. You can access a class in a namespace using the dot (.) qualifier. MyNameSpace is the namespace in the Hello World program above. Now, consider that you want to access the HelloWorld class from some other class in some other namespace. using System; namespace AnotherNameSpace { class AnotherClass { public void Func() { Console.WriteLine ("Hello World"); } } } Now, from your HelloWorld class, you can access it as: using System; using AnotherNameSpace; // you will add this using statement namespace MyNameSpace { class HelloWorld { static void Main(string[] args) { AnotherClass obj = new AnotherClass(); obj.Func(); } } } In the .NET library, System is the top-level namespace in which other namespaces exist. By default, there exists a global namespace, so a class defined outside a namespace goes directly into this global namespace; hence, you can access this class without any qualifier. You can also define nested namespaces.

Using The #include directive is replaced with the "using" keyword, which is followed by a namespace name, just as in "using System" above. "System" is the base-level namespace in which all other namespaces and classes are packed. The base class for all objects is Object in the System namespace.
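Since nested namespaces are mentioned above but not shown, here is a minimal sketch; the namespace and class names are invented for illustration:

using System;
namespace Outer
{
    namespace Inner   // a nested namespace
    {
        class Greeter
        {
            public void SayHello() { Console.WriteLine("Hello from Outer.Inner"); }
        }
    }
}
// From elsewhere, the nested type is reached with the same dot qualifier:
// Outer.Inner.Greeter g = new Outer.Inner.Greeter();
// g.SayHello();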
Variables Variables in C# are almost the same as in C++, except for these differences: - Variables in C# (unlike C++) always need to be initialized before you access them; otherwise, you will get a compile-time error. Hence, it's impossible to access an uninitialized variable. - You can't access a "dangling" pointer in C#. - An expression that indexes an array beyond its bounds is also not accessible. - There are no global variables or functions in C#, and the behavior of globals is achieved through static functions and static variables.

Data Types All types in C# are derived from a base-class object. There are two types of data types: - Basic/Built-in types - User-defined types Following is a table that lists built-in C# types: Note: Type ranges in C# and C++ are different; for example, long in C++ is 4 bytes, and in C# it is 8 bytes. Also, the bool and string types are different from those in C++. bool accepts only true and false, not any integer. User-defined types include: - Classes - Structs - Interfaces Memory allocation of the data types divides them into two types: - Value Types - Reference Types

Value types Value types are those data types that are allocated on the stack. They include: - All basic or built-in types except strings - Structs - Enum types

Reference types Reference types are allocated on the heap and are garbage collected when they are no longer being used. They are created using the new operator, and there is no delete operator for these types, unlike in C++, where the user has to explicitly delete the types created using the delete operator. In C#, they are automatically collected by the garbage collector. Reference types include: - Classes - Interfaces - Collection types such as arrays - String

Enumeration Enumerations in C# are exactly like C++. They are defined through the enum keyword. Example: enum Weekdays { Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday, Friday }

Classes and Structs Classes and structs are the same as in C++, except for the difference in their memory allocation. Objects of classes are allocated on the heap and are created using new, whereas structs are allocated on the stack. Structs in C# are very light and fast data types. For heavy data types, you should create classes. Examples: struct Date { int day; int month; int year; } class Date { int day; int month; int year; string weekday; string monthName; public int GetDay() { return day; } public int GetMonth() { return month; } public int GetYear() { return year; } public void SetDay(int Day) { day = Day; } public void SetMonth(int Month) { month = Month; } public void SetYear(int Year) { year = Year; } public bool IsLeapYear() { return (year % 4 == 0); } public void SetDate (int day, int month, int year) { } ... }
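The Weekdays enumeration above is declared but never used; a quick usage sketch (the variable names are made up):

Weekdays today = Weekdays.Monday;
int dayIndex = (int)today;                         // an enum converts to its underlying int with a cast
Console.WriteLine("{0} = {1}", today, dayIndex);   // prints "Monday = 2"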
true: false; } public void SetDate (int day, int month, int year) { this.day = day; this.month = month; this.year = year; } } Here is the way you will get and set these properties: class User { public static void Main() { Date date = new Date(); date.Day = 27; date.Month = 6; date.Year = 2003; Console.WriteLine("Date: {0}/{1}/{2}", date.Day, date.Month, date.Year); } } Modifiers You must be aware of public, private, and protected modifers that are commonly used in C++. Here, I will discuss some new modifiers introduced by C#. readonly The readonly modifier is used only for the class data members. As the name indicates, the readonly data members can only be read once they are written either directly initializing them or assigning values to them in constructor. The difference between the readonly and const data members is that const requires you to initialize with the declaration, that is directly. See this example code: class MyClass { const int constInt = 100; //directly readonly int myInt = 5; //directly readonly int myInt2; public MyClass() { myInt2 = 8; //Indirectly } public Func() { myInt = 7; //Illegal Console.WriteLine(myInt2.ToString()); } } sealed the sealed modifier with a class doesn't let you derive any class from it. So, you use this sealed keyword for the classes that you don't want to be inherited from. sealed class CanNotbeTheParent { int a = 5; } unsafe You can define an unsafe context in C# by using an unsafe modifier. In the unsafe context, you can write an unsafe code; for example, C++ pointers and so forth. See the following code: public unsafe MyFunction( int * pInt, double* pDouble) { int* pAnotherInt = new int; *pAnotherInt = 10; pInt = pAnotherInt; ... *pDouble = 8.9; } Interfaces If you have an idea of the COM, you will immediately know what I am talking about. An interface is the abstract base class containing only the function signatures whose implementation is provided by the child class. In C#, you define such classes as interfaces using the interface keyword. .NET is based on such interfaces. In C#, where you can't use multiple class inheritance, which was previously allowed in C++, the essence of multiple inheritance is acheived through interfaces. That's how your child class may implement multiple interfaces. using System; interface myDrawing { int originx { get; set; } int originy { get; set; } void Draw(object shape); } class Shape: myDrawing { int OriX; int OriY; public int originx { get{ return OriX; } set{ OriX = value; } } public int originy { get{ return OriY; } set{ OriY = value; } } public void Draw(object shape) { ... // do something } // class's own method public void MoveShape(int newX, int newY) { ..... } } Arrays Arrays in C# are much better than in C++. Arrays are allocated in the heap and thus are of the reference type. You can't access an out-of-bound element in an array. So, C# prevents you from that type of bug. Also, some helper functions to iterate array elements are provided. foreach is the statement for such an iteration. The difference between the syntax of the C++ and C# array is: - The square brackets are placed after the type and not after the variable name. - You create element locations using new operator. C# supports single dimensional, multi dimensional, and jagged array (array of an array). 
Examples: int[] array = new int[10]; // single-dimensional // array of int for (int i = 0; i < array.Length; i++) array[i] = i; int[,] array2 = new int[5,10]; // 2-dimensional array // of int array2[1,2] = 5; int[,,] array3 = new int[5,10,5]; // 3-dimensional array // of int array3[0,2,4] = 9; int[][] arrayOfarray = new int[2][]; // Jagged array - // array of array of // int arrayOfarray[0] = new int[4]; arrayOfarray[0] = new int[] {1,2,15};

Indexers An indexer is used to write a method to access an element from a collection directly, using [], like an array. All you need is to specify the index to access an instance or element. The syntax of an indexer is the same as that of class properties, except that an indexer takes an input parameter: the index of the element. Example: Note: CollectionBase is the library class used for making collections. List is the protected member of CollectionBase, which stores the collection list. class Shapes: CollectionBase { public void add(Shape shp) { List.Add(shp); } //indexer public Shape this[int index] { get { return (Shape) List[index]; } set { List[index] = value; } } }
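A short usage sketch for the Shapes indexer above (the Shape instances are just placeholders):

Shapes shapes = new Shapes();
shapes.add(new Shape());
shapes.add(new Shape());
Shape first = shapes[0];              // read through the indexer
shapes[1] = new Shape();              // write through the indexer
Console.WriteLine(shapes.Count);      // Count is inherited from CollectionBase; prints 2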
Example: void Func(params int[] array) { Console.WriteLine("number of elements {0}", array.Length); } Func(); // prints 0 Func(5); // prints 1 Func(7,9); // prints 2 Func(new int[] {3,8,10}); // prints 3 int[] array = new int[8] {1,3,4,5,5,6,7,5}; Func(array); // prints 8 Operators and Expressions Operators are exactly the same as om C++ and thus the expression, also. However, some new and useful operators have been added. Some of them are discussed here. is operator The is operator is used to check whether the operand types are equal or convertable. The is operator is particularly useful in the polymorphism scenarios. The is operator takes two operands and the result is a boolean. See the example: void function(object param) { if(param is ClassA) //do something else if(param is MyStruct) //do something } } as operator The as operator checks whether the type of the operands are convertable or equal (as is done by is operator) and if it is, the result is a converted or boxed object (if the operand can be boxed into the target type, see boxing/unboxing). If the objects are not convertable or boxable, the return is a null. Have a look at the example below to better understand the concept. Shape shp = new Shape(); Vehicle veh = shp as Vehicle; // result is null, types are not // convertable Circle cir = new Circle(); Shape shp = cir; Circle cir2 = shp as Circle; //will be converted object[] objects = new object[2]; objects[0] = "Aisha"; object[1] = new Shape(); string str; for(int i=0; i&< objects.Length; i++) { str = objects[i] as string; if(str == null) Console.WriteLine("can not be converted"); else Console.WriteLine("{0}",str); } Output: Aisha can not be converted Statements Statements in C# are just like in C++ except some additions of new statements and modifications in some statements. The following are new statements: foreach For iteration of collections, such as arrays, and so forth. Example: foreach (string s in array) Console.WriteLine(s); lock Used in threads for locking a block of code, making it a critical section. checked/unchecked The statements are for overflow checking in numeric operations. Example: int x = Int32.MaxValue; x++; // Overflow checked { x++; // Exception } unchecked { x++; // Overflow} } The following statements are modified: Switch The Switch statement is modified in C#. - Now, after executing a case statement, program flow cannot jump to the next case, which was previously allowed in C++. Example: int var = 100; switch (var) { case 100: Console.WriteLine("<Value is 100>"); // No break here case 200: Console.WriteLine("<Value is 200>"); break; } Output in C++: <Value is 100><Value is 200> In C#, you get a compile time error: error CS0163: Control cannot fall through from one case label ('case 100:') to another - However, you can do this similarly to the way you do it in C++: switch (var) { case 100: case 200: Console.WriteLine("100 or 200<VALUE is 200>"); break; } - You also can use constant variables for case values: Example: const string WeekEnd = "Sunday"; const string WeekDay1 = "Monday"; .... string WeekDay = Console.ReadLine(); switch (WeekDay ) { case WeekEnd: Console.WriteLine("It's weekend!!"); break; case WeekDay1: Console.WriteLine("It's Monday"); break; } Delegates Delegates let us store function references into a variable. In C++, this is like using and storing a function pointer for which we usually use typedef. Delegates are declared using a keyword delegate. 
Have a look at this example, and you will understand what delegates are: Example: delegate int Operation(int val1, int val2); public int Add(int val1, int val2) { return val1 + val2; } public int Subtract (int val1, int val2) { return val1- val2; } public void Perform() { Operation Oper; Console.WriteLine("Enter + or - "); string optor = Console.ReadLine(); Console.WriteLine("Enter 2 operands"); string opnd1 = Console.ReadLine(); string opnd2 = Console.ReadLine(); int val1 = Convert.ToInt32 (opnd1); int val2 = Convert.ToInt32 (opnd2); if (optor == "+") Oper = new Operation(Add); else Oper = new Operation(Subtract); Console.WriteLine(" Result = {0}", Oper(val1, val2)); } Inheritance and Polymorphism Only single inheritance is allowed in C#. Mutiple inheritance can be acheived by using interfaces. Example: class Parent{ } class Child : Parent Virtual Functions"); } } class Square : Rectangle { public override void Draw() { Console.WriteLine("Square.Draw"); } } class MainClass { static void Main(string[] args) { Shape[] shp = new Shape[3]; Rectangle rect = new Rectangle(); shp[0] = new Shape(); shp[1] = rect; shp[2] = new Square(); shp[0].Draw(); shp[1].Draw(); shp[2].Draw(); } } Output: Shape.Draw Rectangle.Draw Square.Draw Hiding parent functions using "new" You can define in a child class a new version of a function, hiding the one which is in base class. A new keyword is used to define a new version. Consider the example below, which is a modified version of the above example and note the output this time, when I replace the override keyword with a new keyword in the Rectangle class. class Shape { public virtual void Draw() { Console.WriteLine("Shape.Draw") ; } } class Rectangle : Shape { public new void Draw() { Console.WriteLine("Rectangle.Draw"); } } class Square : Rectangle { //wouldn't let you override it here public new void Draw() { Console.WriteLine("Square.Draw"); } } class MainClass { static void Main(string[] args) { Console.WriteLine("Using Polymorphism:"); Shape[] shp = new Shape[3]; Rectangle rect = new Rectangle(); shp[0] = new Shape(); shp[1] = rect; shp[2] = new Square(); shp[0].Draw(); shp[1].Draw(); shp[2].Draw(); Console.WriteLine("Using without Polymorphism:"); rect.Draw(); Square sqr = new Square(); sqr.Draw(); } } Output: Using Polymorphism Shape.Draw Shape.Draw Shape.Draw Using without Polymorphism: Rectangle.Draw Square.Draw The polymorphism doesn't take the Rectangle class's Draw method as a polymorphic form of the Shape's Draw method. Instead, it considers it a different method. So, to avoid the naming conflict between parent and child, we have used the new modifier. Note: You cannot use the two version of a method in the same class, one with new modifier and other with override or virtual. As in the above example, I cannot add another method named Draw in the Rectangle class which is a virtual or override method. Also, in the Square class, I can't override the virtual Draw method of the Shape class. Calling base class members If the child class has the data members with same name as that of base class, to avoid naming conflicts, base class data members and functions are accessed using a keyword base. See in the examples how the base class constructors are called and how the data members are used. public Child(int val) :base(val) { myVar = 5; base.myVar; } OR public Child(int val) { base(val); myVar = 5 ; base.myVar; } Future Additions: This article is just a quick overwiew of the C# language so that you can just become familiar with the langauge features. 
Although I have tried to discuss almost all the major concepts in C# in a brief and comprehensive way with code examples, I think there is lot much to be added and discussed. In the future, I would like to add more commands and concepts not yet discussed including events and so forth. I would also like to write about Windows programming using C# for beginners. References - our most commonly known MSDN - Inside C# by Tom Archer - A Programmer's Introduction to C# by Eric Gunnerson - Beginning C# by Karli Watson - Programming C# (O'Reilly) About the Author Aisha is a Master of Science in Computer Science from Quaid-i-Azam Univeristy. She has worked in VC++ 6, MFC, ATL, COM/DCOM, ActiveX, C++, SQL, and so forth. These days she is working on .NET framework and C#. Inspired with nature, she loves to seek knowledge. She is also fond of travelling. She keeps a free source code and articles Web site at. GoodPosted by Legacy on 02/24/2004 08:00am Originally posted by: Humayun Faiz Abbas Dear Aisha,Reply thank u very much for your efforts Too complicated for ME... gEEkSpEEK is sooooo boring.Posted by Legacy on 02/17/2004 08:00am Originally posted by: CodeGURL there is nothing to read here, WHY ARE YOU LOOKING???Reply Great Work. ... Needs your contributions to the topics which are not discussed in this articlePosted by Legacy on 11/27/2003 08:00am Originally posted by: vedhamanikandan Great examples.......Needs another version to describe rest of the topicsPosted by Legacy on 11/04/2003 08:00am Originally posted by: Rupa C# language is written in a very brief and specific manner which will be very helpful for any new developer to understand in a day or two. Examples itself is so explanatory that more theory is not necessary.Reply Good quick tutorialPosted by Legacy on 10/28/2003 08:00am Originally posted by: Łukasz Ślachciak This is very good turorial for fast acccomodation to C#. Especially I would like to recommend it for C++ programmers.Reply Very Good!Posted by Legacy on 10/13/2003 07:00am Originally posted by: Phanikumar The article gives a very good overview of C# for C++ programmers. Thanks to Aisha Ikram!! --Reply Phanikumar Great!Posted by Legacy on 09/09/2003 07:00am Originally posted by: Anna This is a great web sideReply Very Good!!! And the second part... When?Posted by Legacy on 07/07/2003 07:00am Originally posted by: Jorge Rojas Congratulations! Very good and clear. It's really a "quick" start, and you've done a good work not messing with provided classes and extras, just the language. I cannot wait for the second part!!! Keep the good work.Reply Programming Alam to the New C# ProgrammersPosted by Legacy on 07/03/2003 07:00am Originally posted by: Malik Asif Joyia A Very Nice work by youReply carry on and Keep it up Thanks Simply SuperbPosted by Legacy on 07/01/2003 07:00am Originally posted by: Jitendra This article is simply superb.Reply I liked it very much.
https://www.codeguru.com/csharp/csharp/cs_syntax/article.php/c5837/Quick-C.htm
CC-MAIN-2018-51
refinedweb
3,887
55.03
40 Joined Last visited Community Reputation307 Neutral About teccubus - RankMember teccubus replied to gamedevnoob's topic in For BeginnersThis article may be helpful: - [quote name='Aks9' timestamp='1356700288' post='5015056'] The specification is not a book, and should not be recommended for learning OpenGL! [/quote] O, rly? I learned OpenGL 4.3 entirely from specification. - The only book you need is GL 4.3 specification, from teccubus replied to Darego's topic in For BeginnersDarego: you just CAN'T make a MMOsomething. Writing coherent, stable, secure and efficient server for 1000+ players is very hard and expensive. teccubus replied to EduardoMoura's topic in For BeginnersI think it is entirely project-dependent. When you are evaluating project design issues, you have to make some choices - for example, what 3rd party components you will be reusing. If existing C# libraries satisfy your needs, then you should choose C#, because it's obviously easier to develop with this language. If there are no C# libraries you can reuse, you should choose C++ in that project. And, personally, I think that if C# is so great, then good libraries will show up sooner or later. But perhaps it is not that great... teccubus replied to ChainedHollow's topic in For Beginners[code] /* header.h */ #ifndef _HEADER_H_INCLUDED_ #define _HEADER_H_INCLUDED_ /* code goes here */ #endif [/code] teccubus replied to Millionaire's topic in For Beginners[quote name='Millionaire' timestamp='1355654760' post='5011221'] I was just wondering why would this user want to use a math engine written in C [/quote]Probably because doing low level math is faster in C than in Java. teccubus replied to Sugavanas's topic in For BeginnersHere you go: It's far better than tic-tac-toe. - [quote name='Revs' timestamp='1355578436' post='5010937'] But I already have other things to which I dedicate my devotion, where I do everything myself from scratch. So there's no time left for me to spend some more devotion on another thing ;) [/quote]And that's why you should forget about making games. teccubus replied to suliman's topic in For Beginners@ultramailman: This issue happens and I don't understand why. Object files should be recompiled whenever any of the dependencies is changed. So, why the are not? - Revs: no offence taken. Game development needs *real* devotion, especially if you want to do it alone. But your attitude is just opposite. - With your knowledge and attitude, it is infinitely hard. teccubus replied to Inuyashakagome16's topic in For BeginnersIf you can't remember method names, use IDE with autocompletion. teccubus replied to juur's topic in For Beginners[quote name='juur' timestamp='1355487196' post='5010574'] (NB: It requires Java to be installed) [/quote] People don't like java applets nowadays. teccubus replied to ChainedHollow's topic in For BeginnersJust read this: and start to code.
https://www.gamedev.net/profile/82811-j-evolas-apprentice/?tab=friends
CC-MAIN-2017-30
refinedweb
472
55.84
in reply to Re: Has anyone attempted to create a PHP to Perl converter? in thread Has anyone attempted to create a PHP to Perl converter? Do you see a way to deal with PHP extensions written in C? I don't know much about either PHP or XS, but I would assume that without at least some database module, etc. (probably written in C), such a port would not have much practical value, and making a PHP extension callable from Perl must be difficult, or am I wrong? Most PHP extensions either dump a bunch of functions into the global namespace, or define a few classes. So to implement a PHP extension, you'd simply dump a few Perl or XS functions into the "PHP::GLOBAL" namespace, or define a few Perl classes. Do you see a way to deal with PHP extensions written in C? See PHP. Translating from one language to another is just a PITA, especially if they have different semantics. It never includes translating the libraries; parsing the language is always the easiest.
http://www.perlmonks.org/index.pl?node_id=1062363
CC-MAIN-2016-18
refinedweb
177
55.47
In the first installment of C is for Cocoa, we moved from one line of code to two lines of code, pointing out that the second one would be executed directly after the first, like items being crossed off of a to-do list. Our program knows to do the things we tell it to do in the order that we tell it. In lesson 2, we learned how to tweak this simple list approach by making single lines of code call functions where many more lines of code might be found. Our program knows that some of the things we tell it require more than one step. Although we didn't make note of it at the time, both of these demonstrate the same concept, which is the idea that we'll be tackling today, usually called the flow of control. In our first program, as each line of code was found in order, that line was where the code program "was", and it got executed before the program would move on to the next line. Once we added functions to the mix, the control could move "into" those, and again would take the code on one line at a time, making sure to execute every line of code before it was done. Related Reading Learning Cocoa with Objective-C By James Duncan Davidson, Apple Computer, Inc. But sometimes we don't want it to execute every line of code. Sometimes we want it to execute this line of code when the computer is connected to a network, and this line if it's not. Sometimes we want the computer to choose from a whole range of things to do, depending on one circumstance. Sometimes we want the program to do something a whole lot of times, like while the user is idle, or as many times as there were key presses. In all of these cases, we want to change the flow of control in our program. We want the computer to do something that's not just a simple list of directions, each to be done once and checked off, never to be looked at again. We want our program to do what makes sense given its circumstances, and not bother with the rest. This is what we'll be looking at in this lesson. But before we do that, we need to learn how to say no. When you ask a simple question, you expect a simple answer. But as we learned in lesson 1, you can't ask the computer a question, and your code certainly won't ask you any. Your program might ask the user questions, but the programmer is left in the position of uttering only imperatives forever and ever. The solution to all of this is borrowed from an old Irish schoolteacher named George Boole, who invented a way to express "yes" and "no" called boolean logic. It is traditional in C to refer to these as TRUE and FALSE (note the case: you must use all caps or it won't work), but Cocoa traditionally uses YES and NO. In this lesson, we'll use the C tradition because they stand out more in prose. Now you might think that the best way to do this would be to make a boolean type that handles TRUE/FALSE distinctions. And indeed in a perfect world that's exactly how this would be handled. In newer languages, that's what's done. In C, however, we cheat a little. Instead of actually making a type that handles this, we use a type we already have and pretend. So we take our familiar int and whenever it's a zero, we read that as FALSE; when it's not, we read that as TRUE. This might seem a little odd, but it actually leads to a number of simplifications later on, which we'll see in later lessons. But now let's head into this lesson and learn to control our flow. int The simplest type of flow control structure is the if block. 
It's so simple that you probably don't need much help dissecting the following code block: if int integerForSeedValue(int seedNumber) { int retval = 0; if (1) { printf("This is always true.\n"); retval = seedNumber - 3; } return retval; } You should use this and later examples in your code to modify the integerForSeedValue function we wrote in the last lesson. Each of the following code blocks will work and do something slightly different. As we discuss each, run your program with the newest code. integerForSeedValue I've added a declaration for the variable retval, which is the value that the function will return (this is a common name for such a variable). Also note that the variable is returned at the end of the function. The middle is where the new conditional is. retval Here we've added if, and then something in parentheses, and then a block. The key to it all is that something in the middle, in parentheses. That is called our condition, and if it's TRUE, the code in the block gets executed, and the function returns seedNumber -3. If it's not, then the computer skips the block entirely, and the function returns the 0 that retval is set to in its declaration. Since our condition here is the constant 1, the block is always executed. Let's make the code a little more interesting: seedNumber -3 0 int integerForSeedValue(int seedNumber) { int retval = 0; int cond = 17 < 23; if (cond) { printf("What do you know: 17 *is* less than 23!\n"); retval = seedNumber - 3; } return retval; } We added a line of code that seems familiar to us: it's a variable declaration that creates a variable named cond. But how it does that is a little new. Instead of the familiar arithmetic operators, we have the comparison operator "<", the less-than operator, which you remember from grade school when you were learning which numbers were bigger than others. Like all operators, it operates on the things around it. If it was an addition sign, it would add the two things around it, but since it's the less-than operator, it compares them. If the thing on the left is less than the thing on the right, it evaluates to a boolean value for TRUE. Since 17 is always less than 23, the first line sets cond to TRUE. Then, when we get to our if condition, since cond is true, the block is executed. If you switched the operator to be the greater-than operator ">", cond would be FALSE and the if block would not be executed. cond < > What would be executed in that case is the optional else block, which we can see in this code example: else int integerForSeedValue(int seedNumber) { int retval = 0; int cond = 17 > 23; //note we're now using the greater-than operator if (cond) { printf("What do you know: 17 *is* greater than 23!\n"); retval = seedNumber - 3; } else { printf("Well lookie here: 17 is *less than* 23!\n"); retval = seedNumber * 3; } return retval; } The else block will be executed whenever the if block is not executed--and vice versa. But note that neither of these need to be a block at all. 
You can just put a single line of code after the if's condition or the else, and that line of code will be executed in lieu of a block: int integerForSeedValue(int seedNumber) { int retval = 0; int cond = 17 > 23; //no printf's here; only one line statements if (cond) retval = seedNumber - 3; else retval = seedNumber * 3; return retval; } It's a good practice to always include the block, because leaving it out can lead to headaches later on if you want to add code to either side of the conditional and you forget to add the brackets then. You're usually better off making a habit of always putting the brackets in whether you need them right away or not. But it is often the case that the logic you need is more complex than a simple this or that choice. It's often this, or that, or this other thing. In that case, we chain our if/else statements together to allow more and more decisions, like this: int integerForSeedValue(int seedNumber) { int retval = 0; int cond = 17 > 23; if (cond) { printf("Amazing! 17 is *greater than* 23! My life has changed\n"); retval = seedNumber - 3; } else if ((seedNumber < 5) && (seedNumber > -5)) { printf("That there seed number is real close to zero!\n"); retval = seedNumber * 2; } else { printf("No special cases here; do what we normally do!\n"); return = seedNumber * 3; } return retval; } The second if statement is only tested if the first fails, which it always will since 17 is never greater than 23. But here we also do two important new things. First, our condition includes the function's input. This, as you can imagine, is much more useful than defining variables that will always be TRUE or FALSE as we have done before. Secondly, we did some craziness with some ampersands that's totally new. What we're doing here is making a compound condition. Now, instead of just checking one condition, we're checking two: we want to make sure seedNumber is less than 5 and greater than -5. That double-ampersand means boolean and. Why not a single ampersand? That's used for something else, which you won't need to care about for some time. seedNumber Go back over the examples of if/else blocks and try them in your program; each one works a little differently, so try each one if you haven't yet. In each example, watch what code does and does not get executed to see the flow of control. When you're done, we'll look at a different way to do the same type.
http://www.macdevcenter.com/pub/a/mac/2003/08/19/cocoa_series.html
CC-MAIN-2015-48
refinedweb
1,657
67.99
On Friday I made progress on 3 things: - getting my 5.8 kernel to boot faster in Firecracker - built some puzzle tarballs (which I’ll explain in a bit) - loaded the puzzle tarballs into my Firecracker VMs the mystery of the slow kernel boot I noticed that I had 2 pauses when I started my kernel with Firecracker (here’s the complete log). one for 0.3 seconds: [ 0.142205] i8042: If AUX port is really absent please use the 'i8042.noaux' option 2021-01-29T09:14:10.315589518 [anonymous-instance:WARN:src/devices/src/legacy/i8042.rs:126] Failed to trigger i8042 kbd interrupt (disabled by guest OS) [ 0.424261] serio: i8042 KBD port at 0x60,0x64 irq 1 and one for 0.5 seconds: [ 0.442595] Key type encrypted registered [ 0.936675] input: AT Raw Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 I asked about this in the Firecracker Slack and got a reply suggesting to use the i8042.noaux option for the first delay. (which I hadn’t noticed, even though it says it right there :)). That fixed it! I still don’t understand why the second delay is happening or how to fix it. Someone suggested it might be related to secure boot, but I spent a while poking at my kernel config and tried compiling with 6 different configurations and didn’t get anywhere, so I gave up for the day and moved onto something else. building tarballs of each puzzle I wrote a little Python script to do this. It basically runs a build.sh script that I write to build the puzzle inside a Docker container, and then makes a tarball of the Docker container’s current working directory. Basically I took advantage of the fact that I know how Docker uses overlayfs internally and just took a tarball of the upper part of the overlay directly. I did this because I couldn’t find a good way to do it using the normal Docker interfaces – I tried some things using a Docker experimental feature called docker build --squash but didn’t really get anywhere. import os import subprocess import json import time pwd = os.getcwd() container_id = subprocess.check_output(["docker", "run", "-v", f"{pwd}:/puzzle", "-td", "my-base-image", "/bin/bash"]) container_id = container_id.decode("utf-8").strip() container_json = subprocess.check_output(["docker", "inspect", container_id]) properties = json.loads(container_json) upperdir = properties[0]['GraphDriver']['Data']['UpperDir'] subprocess.check_call(["docker", "exec", container_id, "bash", "/puzzle/build.sh"]) subprocess.check_call(["sudo", "tar", "-C", upperdir, "--exclude=puzzle", "--xattrs", "-cf", "puzzle.tar", '.']) subprocess.check_call(["docker", "kill", container_id]) I then wrote a little Go function to load in these tarballs when I start a VM. It’s really dumb, it just mounts the image, extracts the tarball into the mounted directory, and unmounts. 
func (opts *options) copyPuzzleFiles(imagePath string) error { if opts.Request.Tarball == "" { return nil } mountDir := filepath.Join(ImageDir, pseudo_uuid()) err := os.Mkdir(mountDir, 0755) defer os.Remove(mountDir) if err != nil { return fmt.Errorf("Failed creating dir %s: %s", mountDir, err) } if err := exec.Command("mount", imagePath, mountDir).Run(); err != nil { return fmt.Errorf("Failed mounting path %s: %s", imagePath, err) } if err := exec.Command("tar", "-C", mountDir, "-xf", opts.Request.Tarball).Run(); err != nil { return fmt.Errorf("Failed to extract tarball %s: %s", opts.Request.Tarball, err) } if err := exec.Command("umount", mountDir).Run(); err != nil { return fmt.Errorf("Failed umounting path %s: %s", imagePath, err) } return nil } One thing I've been thinking about with code like this is whether it makes sense to shell out to tools or whether it would be better to use a Go library – for example, Go has a tar package. Right now I don't really see any benefit to using Go's tar code because I feel like these command line tools (mount/umount/tar) are really a known quantity and if I tried to reimplement them in Go I would just write a buggy version that I'd then have to maintain. filesystems! I also had a really delightful conversation with my friend Dave about filesystems where we talked about my problems with using device mapper and how Firecracker doesn't support qcow2 and ideas for different ways to get Firecracker to support overlay images. There's an interesting discussion on the Firecracker GitHub about implementing a virtio backend to let the VM share a directory from the host.
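For reference, the i8042.noaux fix mentioned at the top goes on the guest kernel command line that Firecracker passes to the VM. A sketch of what the boot_args setting might look like; the surrounding flags here are just typical defaults, not copied from this post:

"boot_args": "console=ttyS0 reboot=k panic=1 pci=off i8042.noaux"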
https://jvns.ca/blog/2021/01/30/day-50--building-some-tarballs-for-puzzles/
CC-MAIN-2021-10
refinedweb
726
58.89
. Naturally, these properties hold for each step in the entire class hierarchy. When inverting this pattern, these properties are inverted as well. Hence, the parent class is responsible for determining if the child gets to go first, last or somewhere in the middle. Furthermore, the parent class gets to choose what to pass to the child class's method and what to do with whatever, if anything, the child class's method returns (see Figure 1). ____________ | A | ^ | |__________| | | | method() | | | |__________| | | | | | /_\ | | _____|______ | | | B | | | |__________| | | | method() | | | |__________| | | | | | /_\ | | _____|______ | | | C | | | |__________| | | | method() | | | |__________| | v Class Hierarchy Normal Extension Inverse Extension Method Invocation Figure 1. Compare Extension with Inverse Extension The fact that the parent class's method "wraps" the child class's method rather than the other way around is the whole motivation for this pattern. At first glance, it is natural to suggest using the Template Method pattern: in the parent class A's method "a", call the child class B's method "b". This indeed implements the wrapping behavior, but it is not scalable. For instance, if a grandchild class C is added, if b is to wrap C's method, C must add a method c. Creating a new name for each step in the hierarchy is nowhere near as elegant as simply calling "super", which you can do with normal extension. Furthermore, creating such names is not flexible. Suppose you wish to move C to be a subclass of A. Aside from changing the C's parent class, you also must rename its c method b. As your class hierarchies get larger, this irritation becomes worse. In traditional HTML, that is, in HTML where tables are used for layout, this pattern is quite helpful. The parent class is called the Layout. It has a method that lays out the page, deciding where the navigation should go and where the main content should go. The child class is called the Screen. It has a method that outputs the main content. Although the Screen inherits from the Layout, it is the Layout's method that takes control first. The Layout then passes control to the Screen's method--the opposite of "super"--when it is time for the Screen's method to do its work. To change the navigation used on a particular Screen--login pages and help pages usually look quite different from normal pages--simply change that Screen's parent class. To have all of the Screens in a section have their own sub-navigations, subclass the existing Layout with a new Layout and have the Screens subclass the new Layout. Applicability Inverse Extension can be used when the behavior of the parent's method should decorate the behavior of the child's method instead of the other way around, especially if this is true for more than two layers of the class hierarchy. You also can use inverse extension when a parent class method needs to control access to the method of its child class. Structure, Participants and Collaborations The structure of the class hierarchy is the same as it is for normal extension. One participant is the Parent (Layout), whose method wraps the child class's method. The other participant is the Child (Screen), whose method does the main task at hand, without having to worry about many of the details that the parent class's method handles for it. The Parent class's method decides when and if to forward the flow of control to the Child class's method. 
The Parent class's method optionally may perform additional operations before and after forwarding the flow of control. It may pass any argument it wants to the Child class's method and do anything it wants with whatever, if anything, the Child class's method returns.

Consequences

The Inverse Extension pattern has the following benefits and liabilities:

- Changing the child class's parent class easily changes all of the code that wraps the child class's method. In the example above, changing a normal Screen's look-and-feel to match that of a help page involves simply changing the Screen's parent (a Layout). No code within the Screen itself needs to be changed.

- The hierarchy is necessarily fixed. This pattern is not appropriate if you need to change the decorator dynamically. The Decorator pattern is more appropriate in that case.

- It is hard to implement. If this pattern is not implemented by the programming language, it may be challenging to implement manually. Specifically, it requires a bit of meta-programming--navigating the inheritance hierarchy and dynamically looking up methods, through reflection.

Implementation

As mentioned above, implementing this pattern manually requires a bit of meta-programming:

- You must be able to iterate over the class hierarchy. In languages such as Python that support multiple inheritance, I have found it helpful to constrain the iteration to the "leftmost", that is, the most primary, classes in the hierarchy, assuming that the others probably would be mixins.

- While iterating over the class hierarchy, you must be able to save a reference to a method of a given name. That is, given an instance "obj" and a method "m", the goal is to loop over the classes in obj's class hierarchy and find all the classes that define m, keeping a list of references to those methods.

- You must have a way for the parent class's method to call the child class's method. Having created the list of methods named m above, the parent class's method must be able to call the next m in the list.

- It may be irritating to have a child class's method signature be constrained by a parent class's method signature. Suppose a parent class, A, has two child classes, B and C. A is generic, whereas B and C are more specific. Let's apply Inverse Extension to the method m. B.m may wish to receive one argument whereas C.m wishes to receive two arguments. In normal extension, calling B.m or C.m with a differing number of arguments is no problem. To maintain this flexibility for Inverse Extension, it is important that B.m and C.m be able to have signatures that are different from each other and from A.m. To implement this, a "varargs" feature in the language is helpful. It often is appropriate for the parent class's method to accept an arbitrary number of arguments and simply pass them unmodified to the child class's method.

- Should the caller know Inverse Extension is happening? A coworker, David Veach, noticed that in normal extension, calling obj.m() hides whether extension is happening in m. It often makes sense to observe this constraint when applying Inverse Extension.

Sample Code

Python's dynamic nature makes implementing Inverse Extension straightforward. In consideration of the points above, it isn't difficult to loop over an object's class hierarchy, thanks to the __bases__ attribute. Nor is it hard to look up a method of a given name in each class, thanks to "getattr". When calling a parent class's method m, a function named callNext is passed as the first argument.
callNext can be invoked anywhere within the method to transfer control to the child class. callNext is implemented using a closure, containing references to the inheritance hierarchy. In Java, an Iterator offers a similar mechanism. Python also supports a varargs feature, hence the child class's method need not be constrained by the parent class's method. In the sample code I have provided, in the interest of simplicity, I have not tried to hide the Inverse Extension pattern from the caller. Nonetheless, hiding the implementation of a private method that uses Inverse Extension behind the API of a public method is a trivial matter. Listing 1 contains the sample implementation in Python. Anthony Eden helped me to create a similar Java implementation, InverseExtend.java, which is shown as Listing 3 at the end of this article. Listing 1. Sample Code in Python, callNext, *args, **kargs) However, the lowest level class's method has no callNext parameter, since it has no one else to call: Class.method(object, *args, **kargs) In the method: callNext(*args, **kargs) should be called when it is time to transfer control to the subclass. This may even be in the middle of the method. Naturally, you don't have to pass *args, **kargs, but a common idiom is for the parent class to just receive *args and **kargs and pass them on unmodified. """ #.insert(0, last) def callNext(*args, **kargs): """This closure is like super(), but it calls the subclass's method.""" method = methods.pop() if len(methods): return method(obj, callNext, *args, **kargs) else: return method(obj, *args, **kargs) return callNext(*args, **kargs) # Test out the code. if __name__ == "__main__": from cStringIO import StringIO class A: def f(self, callNext, count): buf.write('<A count="%s">\n' % count) callNext(count + 1) buf.write('</A>') class B(A): # I don't have an f method, so you can skip me. pass class C(B): def f(self, callNext, count): buf.write(' <C count="%s">\n' % count) callNext() Sidebar: Implementing the Pattern Using Python's New Generators For fans of functional programming, serious Python hackers and other hardcore engineers who think the callNext parameter is inelegant, I offer an alternate Python implementation of this pattern using Python's new generators. Listing 2 uses generators for non-local flow of control. No callNext parameter is needed. Each parent class method does a yield *args, **kargs when it is time to call the child class's method. In fact, this implementation flattens the recursion down to a loop, despite the fact that the code is not tail recursive. In this way, it is similar to a continuation. The one drawback of this implementation is it is not possible to maintain the return value semantics of the earlier implementation. That is, it is not possible for each child class method to return a value to its parent class method because of the nature of generators. Nonetheless, I offer Listing 2 as an intriguing use of generators. Listing 2. Generator Code for Non-Local Flow of Control import types, *args, **kargs) Each parent class method *must* be a generator with exactly one yield statement (even if the yield statement never actually gets called), but the lowest level class method must *not* be a generator. In the parent class: yield args, kargs should be called when it is time to transfer control to the subclass. This may be in the middle of the method or not at all if the parent class does not wish for the child class's method to get a chance to run. 
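For readers on Python 3, here is a compact, self-contained sketch of the same mechanism. It is an illustrative reconstruction based on the calling convention described above, not the article's exact listing (which targets Python 2), and the name inverse_extend is my own.

def inverse_extend(obj, method_name, *args, **kwargs):
    # Walk the hierarchy from the most senior class down to the object's own
    # class, keeping only the classes that define method_name themselves.
    classes = [c for c in reversed(type(obj).__mro__) if method_name in vars(c)]
    methods = [vars(c)[method_name] for c in classes]  # parent first

    def call_next(*a, **kw):
        # Like super(), but transfers control *down* to the subclass.
        method = methods.pop(0)
        if methods:                            # more subclasses still to run
            return method(obj, call_next, *a, **kw)
        return method(obj, *a, **kw)           # lowest level: no call_next

    return call_next(*args, **kwargs)

class A:
    def f(self, call_next, count):
        print('<A count="%d">' % count)
        call_next(count + 1)
        print('</A>')

class B(A):
    pass  # no f method, so it is skipped

class C(B):
    def f(self, call_next, count):
        print('  <C count="%d">' % count)
        call_next(count + 1)
        print('  </C>')

class D(C):
    def f(self, count):
        print('    <D count="%d" />' % count)

inverse_extend(D(), "f", 0)

Running it prints the same kind of nested output as the article's test case: A wraps C, which wraps D, with B skipped because it does not define f.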
""" #.append(last) # Traverse down the class hierarchy. Watch out for StopIteration's which # signify that the parent does not wish to call the child class's method. # generatorMethods maps generators to methods which we'll need for nice # error messages. generators = [] generatorMethods = {} for method in methods[:-1]: generator = method(obj, *args, **kargs) assert isinstance(generator, types.GeneratorType), \ "%s must be a generator" % `method` try: (args, kargs) = generator.next() except StopIteration: break generators.insert(0, generator) generatorMethods[generator] = method # If we didn't have to break, then the lowest level class's method gets to # run. else: method = methods[-1] ret = method(obj, *args, **kargs) assert not isinstance(ret, types.GeneratorType), \ "%s must not be a generator" % method # Traverse back up the class hierarchy. We should get StopIteration's at # every step. for generator in generators: try: generator.next() raise AssertionError("%s has more than one yield statement" % `generatorMethods[generator]`) except StopIteration: pass # Test out the code. if __name__ == "__main__": from cStringIO import StringIO class A: def f(self, count): buf.write('<A count="%s">\n' % count) yield (count + 1,), {} buf.write('</A>') class B(A): # I don't have an f method, so you can skip me. pass class C(B): def f(self, count): buf.write(' <C count="%s">\n' % count) yield () Known Uses Perl Mason uses the Inverse Extension pattern as a key component of its templating capabilities. A parent template can provide a common look-and-feel for each of its child templates. Naturally, a child template has no idea what part of the layout it belongs in, so it is necessary for the parent template to call the child template with the call $m->callnext when it is time to produce its output. Perl Mason was the inspiration for the same feature in my Python Web application framework Aquarium, as well as for this article. Related Patterns The Template Method pattern is similar to the Inverse Extension pattern when there are only two levels in the class hierarchy. Each involves a parent class method calling a child class method. However, as mentioned above, the Inverse Extension pattern is more appropriate when the Template Method pattern needs to be applied recursively--when there is an arbitrary and varying number of classes in the class hierarchy. If the class hierarchy is deep and volatile, creating a new method name at each step, which is required by the Template Method pattern, is not scalable. The Decorator pattern is similar to the Inverse Extension pattern, because the decorating class's method calls the decorated class's method. However, the Decorator pattern is applied at runtime instead of being based on the class hierarchy. If you need to wait until runtime in order to associate which objects get to decorate which other objects, use the Decorator pattern to apply the decorators dynamically. If you want the decorators to be applied automatically based on the class hierarchy, use the Inverse Extension pattern. Listing 3. Sample Code in Java Listing 3. import java.lang.reflect.Method; import java.util.ArrayList; import java.util.List; /** * @author Anthony Eden, Shannon Behrens */ /** * This class has a static method, <code>inverseExtend</code>, that implements * the Inverse Extension design pattern. */ public class InverseExtend { private Object obj; private String methodName; private Object arg; private List methods; /** Just accept the parameters. 
*/ private InverseExtend(Object obj, String methodName, Object arg) { this.obj = obj; this.methodName = methodName; this.arg = arg; } /** * Iterate downward through a hierarchy calling a method at each step. * * @param obj the object to pass to the method * @param methodName the name of the method to call * @param arg the argument to pass to the method * * The method should have a signature something like: * * <code>public static Object f(InverseExtend inverseExtend, * Object arg)</code> * * (The method must be static because non-static methods are always virtual * and it's not possible to call an overriden virtual method. If you wish * to refer to a particular instance, you must pass it using the arg.) * * Within the method: * * <code>inverseExtend.next(arg)</code> * * should be called when it is time to transfer control to the subclass. * This may even be in the middle of the method. */ public static Object inverseExtend(Object obj, String methodName, Object arg) { InverseExtend me = new InverseExtend(obj, methodName, arg); // Figure out the classes in the class hierarchy. "classes" will // contain the most senior classes first. List classes = new ArrayList(); Class c = obj.getClass(); classes.add(c); while ((c = c.getSuperclass()) != null) { classes.add(0, c); } // Skip classes that don't define the method. Be careful--getMethod() // will search parent classes for the method. me.methods = new ArrayList(); Method last = null; Class[] signature = { me.getClass(), (new Object()).getClass() }; for (int i = 0; i < classes.size(); i++) { c = (Class) classes.get(i); try { Method m = c.getMethod(me.methodName, signature); if (!m.equals(last)) { last = m; me.methods.add(0, last); } } catch (NoSuchMethodException e) { // Don't worry yet. Someone else might have the method. } } // Now it's time to worry if me.methods is still empty. if (me.methods.size() == 0) { throw new RuntimeException( new NoSuchMethodException(me.methodName)); } return me.next(arg); } /** * Call the subclass's method. If there is no subclass or if any other * exception is thrown, raise a RuntimeException. * * @param arg pass this to the subclass's method * @return whatever it returns */ public Object next(Object arg) { if (methods.size() == 0) { throw new RuntimeException("Stack underflow."); } Method method = (Method) methods.get(methods.size() - 1); methods.remove(methods.size() - 1); Object args[] = { this, arg }; try { return method.invoke(null, args); } catch (Exception e) { throw new RuntimeException(e); } } /* * Everything below is used to show the code in action. Please excuse the * sloppiness and redundancy ;) I'm using static inner classes so that the * test cases are near the code being tested. 
*/ public static void main(String[] args) { String\n" + " <C count=\"1\">\n" + " <D count=\"2\" />\n" + " </C>\n" + "</A>\n"; Argument argument = new Argument(); argument.buf = new StringBuffer(); argument.count = 0; inverseExtend(new D(), "f", argument); if (argument.buf.toString().equals(expected)) { System.out.println("InverseExtend test passed."); } else { System.out.println("InverseExtend test failed."); System.out.println("Expected:\n" + expected); System.out.println("Got:\n" + argument.buf.toString()); } } private static class Argument { public StringBuffer buf; public int count; } private static class A { public static Object f(InverseExtend inverseExtend, Object arg) { Argument argument = (Argument) arg; argument.buf.append("<A count=\"" + argument.count + "\">\n"); argument.count++; inverseExtend.next(argument); argument.buf.append("</A>\n"); return null; } } private static class B extends A { /* I don't have an f method, so you can skip me. */ } private static class C extends B { public static Object f(InverseExtend inverseExtend, Object arg) { Argument argument = (Argument) arg; argument.buf.append(" <C count=\"" + argument.count + "\">\n"); argument.count++; inverseExtend.next(argument); argument.buf.append(" </C>\n"); return null; } } private static class D extends C { public static Object f(InverseExtend inverseExtend, Object arg) { Argument argument = (Argument) arg; argument.buf.append(" <D count=\"" + argument.count + "\" />\n"); return null; } } } Acknowledgments Thanks go to Anthony Eden, Brandon L. Golm and Kyle VanderBeek for reviewing the article and/or source code. Resources Gamma, Erich, et. al. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, ISBN 0-201-63361-2. Shannon Behrens is a self-professed language lawyer who works for Iron Port Systems in San Bruno, California. His eventual goal is to implement a Python-like systems language and then develop a practice kernel in that... inner keyword Guido van Rossum said: Regarding BETA having the same feature: it probably inherits this from Simula-67; ISTR that it has "inner" as a keyword to indicate that a subclass's code should run. See e.g. and search for Inner. But this appears to be before methods were invented. (Being Scandinavian there's no way that BETA would *not* be a descendant of Simula-67. :-) the world turned upside down Thanks for an informative and interesting take on class hierarchies. At Open List, we rely heavily on a pipeline architecture for data processing. We used to use a pretty standard, 3-layer class hierarchy, where each type of object to be fed through the pipeline was instantiated at the lowest level of the chain, selectively calling up to higher versions of methods. What we realized, however, was that in order to add new types of objects to the pipeline, we had to define bottom-level classes that would either a) defer to parent methods, b) stand in lieu of them, or c) do their own work in combination with calls to SUPER (parent) methods. The problem with this approach is that every new source required adding code at the bottom layer, even if it defered completely to higher-level classes to do the work. In other words, the traditional class heiracrhy expected us to labor in the realm of the specific even if all we needed was a call to the general. To rectify it, we developed a base class that walks down, as opposed to up the inheritance tree. Generic (higher-level) classes are called first in the pipeline. 
These look up to two layers down, deferring to any more specific classes that exist. And, because these lower-level classes inherit from their higher level classes, we can still take advantage of a) overriding methods in parent classes, and b) calling parent methods by way of SUPER. The difference is that we can now process any new type of data without having to write stub classes for it. The order of preference is to start with parents, defer to children only when they exist, and then to let the children selectively leverage their parent's abilities. We can now add data sources in about 1/10 the time, still take advantage of using hierarchy to handle conditional pipeline processing, and we have also greatly reduced the number of bugs, volume of code, etc. I realize this is only a partial analogue to what's detailed in the article, but seeing the world turned upside down can have its advantages!

If it weren't for the "Design Pattern" wording...

..., I wouldn't comment on this.

However (putting aside many other issues with this article):

- it is a language-dependent hack;
- it suggests a hard way for solving a non-existing problem.

That's why I wouldn't be so bold as to call it a "pattern".
Your class relationships are inside out, upside down. Hence your need for a bizarre, upside down, construct like having a baseclass extend its subclasses. "Inverse Extension" maybe used quite often. That does not imply that it qualifies as a design pattern. Authors that need it should fix their designs. has-a rather than is-a [Hmm, I replied earlier, but my reply got dropped. I wonder if there is a limit to how deep threads can go.] Ok, let's be nice to one another :) It'll make this conversation more enjoyable for both of us, and it'll stimulate open minds. below (see HEADS UP below), I have multiple layers. Does each layer have-a or is-a?? Best Regards, -jj Re: has-a rather than is-a > Ok, let's be nice to one another :) It'll make this conversation more enjoyable for both of us, and it'll stimulate open minds. No offense intended. Don't take this personally, we are discussing the merits of a technical solution, not you. > > Sounds like subclassing. > below (see HEADS UP below), I have multiple layers. Does each layer have-a or is-a? Why not have a LayerLayout class, an instance of which accepts the stuff (Components, see below) construction time and puts the stuff into layers? > >? Unfortunately, you phrased your questions in terms of changes you make to your code, rather than in terms of what you want to achieve, so I'll have to do a little guessing as to what functionality you need. BTW, this begs the question what you do when you have two screens, say a news page and a faq page, in your application that have identical layout. Do you introduce two classes NewsScreen and FAQScreen, the first overriding CssAndJavaScript.getTitle to return "News" and the second overriding CssAndJavaScript.getTitle to return "FAQ"? Do you need to override all these methods to finetune the layout of a page? Seems so. The getTitle thing to me does sound like an atribute of a screen: 'Screen = new SomeScreen("This is the page's title");'. The Sparse.doHelpBar does sound like an boolean attribute to the Sparse class that tells in instance of the class to display the HelpBar or not. Might also be separate layout subclasses, one taking a HelpBar, the other not. Can't tell at this point. Why do you want to override Bare.getHeaders and other methods? Finetuning? What do they do? You seem to have a need to recursively wrap parts that make up a page into html; have tables inserted into table cells, into layers etc, etc. Am I right? This sounds like the Composite Design Pattern. Why not introduce a class that represents the parts that make up a screen/page: abstract class Component, say, having two, concrete subclasses, SimpleComponent (the actual content) and CompositeComponent. The latter *having* Components. A Component does have a Layout. Problem solved. No need for 'many delegation methods': simple recursion does the trick. Has-a rather than is-a indeed, I think. Correction.. ..should read 'return new CompleteLayout();' and 'return new SparseLayout();' Interesting and neat code. Interesting and neat code. It took me a little bit to get into the article. I only gave it a shot because I was curious how you did this with Python, but I wasn't sure at first why I should WANT to do this. The topic is a little abstract. You should lead with a practical example for motivation. The HTML example was a little fuzzy. I still had trouble disambiguating this from the Template pattern. Interesting and neat code. > I wasn't sure at first why I should WANT to do this. Yes, I definitely failed to explain this. 
See below where I try to do better. Thanks! see below See the "HEADS UP" below. :) What class of problems does Inverse Extension solve? It is unclear to me what class of problems are solved by the Inverse Extension pattern. I fully appreciate the fact that Inverse Extension is like the Template Pattern. In the Template Pattern one has subsequently refining methods that have different names and, in general, varying signatures as one goes down the inheritance hierarchy. The article seems to imply that the Inverse Extension pattern apllies to a class hierarchy in which the refining methods, so te speak, all have the same name and signature. I do not understand what class of problems would have this look. What I miss in the article is the following - An example of a class hierarchy implemented without Inverse Extension. - An exposition of the problems it has. - An refactoring of the hierarchy with Inverse Extension. - An explanation of how Inverse Extension fixes the problems. Could you give such an example? What class of problems does Inverse Extension solve? Hmm, sorry, I guess the paragraph that starts with "In traditional HTML, that is," didn't come out as clearly as I had hoped. As I mentioned, I first saw this pattern in Mason (the Perl templating engine) where it is a key component. I ported it to Python in my project, Aquarium, and it is a key part of Aquarium. Imagine you have a common look and feel for all the Web apps at a company. Each application at the company subclasses the Layout class from the common look and feel and adds its own stuff. Then each section within that app might add its own stuff. Then each individual page adds its own stuff. You have four layers in the inheritance hierarchy. Suppose at each layer, you're using a table to wrap the lower layer's content. It's like an onion. Now, you could create a new method at each layer: doCommonLookAndFeel, doApp, doSection, doScreen, where each method in the parent class calls the child class's method. However, what happens if you suddenly need to shove in a layer, for instance, you need to introduce a layer that is a parent of the sections. You end up renaming methods, etc., and it all gets really sticky. Consider what happens if you want some pages to subclass layers that are higher up. Suddenly, you have a page defining the doSection method in order to call its own doScreen method. Ugly. With inverseExtend, you can add as many layers to the onion as you want, and you can shove in new layers in the middle of the onion if you want. It all works out. In my case, each layer has a __call__ method (which is Pythonic), and you can rearrange things all over the place just by changing the parent class. By the way, this isn't some academic exercise. I'm actually the "UI Platform Engineer" at my company, and I maintain the shared look and feel as well as the framework across all the Web UI's at my company. The whole onion idea is something I really deal with. Well, I hope that helps. Thanks for your reply. It Thanks for your reply.? Do I understand you correctly that the hierarchy is as follows, :- indacting subcalssing, Layout :- Application :- Section :- Page? Are Screens and Pages synonyms? If not, where does Screen fit into this model? What is a Section ? I.e. what does it represent? How do these classes relate? What is the behaviour? I.e. what methods are involved, what do they do and how do they call eachother? What is its public interface? What is its context? How does a client use an Layout instance? 
How is a page ulimatally displayed? What do the doCommonLookAndFeel, doApp, doSection, doScreen do? Return html? How many sections does an application have? How many pages does a section have? Why would it be needed to suddenly shove in a layer? Why would one need some pages to subclass layers that are higher up? Where does navigation fit into this model? HEADS UP: big, concrete example of my usage of this pattern Ok, I replied, but it was too big, and the system won't let me post it. Hence, I've posted it on my blog. Please see comment 2 here. I lay out the concrete class hierarchy of how I've actually used this pattern. I hope everyone who still has questions reads this. Thanks! Thanks for your reply. It > Thanks for your reply. Thanks for your patience as I try to explain what I'm talking about. >? Yes, a Layout does encapsulate the common look and feel. By "common look and feel", I mean that all of our apps look and behave pretty much the same. In my code, I've used the term Layout pretty much as an interface. > Do I understand you correctly that the hierarchy is as follows, :- indacting subcalssing, Layout :- Application :- Section :- Page? Well let me step away from the hypothetical and talk about an actual class hierarchy (top to bottom). Screens are the main content. Layouts are pretty much any parent of a screen. I'll start with the layouts, and at the bottom are the screens. In Aquarium, which is my open source framework: Bare -- This class understands content types, HTTP headers (including turning on or off HTTP caching), etc. Some subclasses in your app might include a screen that generates an XML report or a screen that generates a dynamic image containing a graph. HTML -- This is mostly for people who want to type the HEAD, HTML, BODY, etc. tags themselves. It knows that it has content type text/html. CssAndJavaScript -- This class knows about the basic layout of HTML. It has methods for the title, CSS, JavaScript, etc. that can be overriden. It uses a bunch of defaults that are defined in a properties file. It's here so that I don't have to duplicate boilerplate. Simple screens supclass from CssAndJavaScript when I'm trying to get a message out without caring about look and feel. Consider the screens that Apache uses for 404, etc. Now, from within my common look and feel code, which is used by multiple apps at my company: Sparse -- This is the layout used for content where you don't want to show the tabs. The login screen, the help screens, the app-specific 404 screen, etc. all extend from here. Complete -- This layout contains the complete navigation, including tabs. Most of the screens in the application extend from this. Most of apps that use my common look and feel use this as their main layout. Cluster -- This layout knows about extra navigation needed for clustering (which we've unfortunately used as a misnomer for centralized management). Only some of my apps need to worry about clustering. In those apps, screens use this layout instead of the Complete layout. Now, for actual screens: various screens -- At the lowest level are the various screens. The basic idea is that you start a screen by picking how much you want your parent class to do for you automatically. You can always override methods defined in your parent classes that it sets up for you to override, such as getTitle. You usually try to pick the most specialized layout (e.g. Cluster or Complete depending on whether or not your app supports clustering), however, certain screens need to assert more control. 
For instance, it makes no sense for an XML report to subclass the Cluster layout. > Are Screens and Pages synonyms? Yes, sorry about that. Screen is the actual "interface". I used page as a synonym although there's nothing actually called page in my code. > If not, where does Screen fit into this model? > > What is a Section ? I.e. what does it represent? Now that I've laid out a concrete class hierarchy, let's just drop the hypothetical stuff. > How do these classes relate? All those classes above extend from one another. > What is the behaviour? Based on inverse extend, each higher level class's __call__ method gets called. Whenever it's ready, it passes off control to the child class's __call__ method. Hence, the parent classes "wrap" the child classes. This really makes sense in HTML. (It's also useful in a couple other contexts. Imagine a parent class that checks access restrictions beforing allowing access to the child class. Note, the parent class can prevent the child class's __call__ method from running at all by simply refusing to call callNext! I have found this to be a useful technique.) > I.e. what methods are involved, Everyone has a __call__ method (which is a Pythonism). > what do they do and how do they call eachother? Layouts look like (pseudocode): class CssAndJavaScript(HTML): def __call__(self, callNext): # do stuff, like include the navigation callNext(*args, **kargs) # Let the subclass do its stuff. # do more stuff, like include the copyright footer Screens look like (pseudocode): class MyScreen(CssAndJavaScript): def __call__(self): # Do my main content > What is its public interface? In Aquarium, it's easy. Everyone has a __call__ method. > What is its context? I have the notion of a context, but I'm pretty sure it doesn't match what you're asking about. The important thing is that the child class's __call__ method is called by the parent class's __call__ method in order to do its work at just the right time. > How does a client use an Layout instance? How is a page ulimatally displayed? Well, Aquarium uses Cheetah for this. Here's a simple screen: | #extend aquarium.layout.Cluster | #implements __call__ | | Hi! I'm a simple looking screen, but my parent automatically takes care of | outputing all the common look and feel! When you load a URL in the browser, Aquarium maps the URL to a particular screen. Then it calls inverseExtend on that screen's __call__ method. Hence, the parent class gets to do work, and then it passes control down the inheritance hierarchy, until finally the screen gets to output its main content. > What do the doCommonLookAndFeel, doApp, doSection, doScreen do? Dropping the hypothetical names. > Return html? Yes, actually they do. Hence, the parent class can take that HTML and do whatever it wants with it. > How many sections does an application have? You saw how many layouts I have. In practice, each application might add maybe one additional layout on an as needed basis. Each app might have up to about 100 screens. > How many pages does a section have? Dropping the hypothetical names. > Why would it be needed to suddenly shove in a layer? I've had to do this twice so far. 1. My buddy told me that the CssAndJavaScript class was stupid. He wanted to output the HTML boilerplate manually. Hence, I slipped in the HTML class between Bare and CssAndJavaScript to stop his complaining. 2. When we added centralized management, I added a subclass of Complete called Cluster. I had to change all the screens to subclass Cluster instead of Complete. 
Note, because I was using inverse extend instead of the template design pattern, I only had to change one line, the name of the superclass, instead of having to change method names as well (you need multiple method names for multiple layers of the template design pattern). It's nice because they all have the one method, __call__, and the only thing that matters is your parent class. > Why would one need some pages to subclass layers that are higher up? Basically, a screen picks the parent class that provides the behavior it wants. An XML report doesn't want any HTML at all. The login page doesn't need navigational tabs. The generic 404 page that comes with Aquarium subclasses CssAndJavaScript because it doesn't know anything about your app-specific layouts. > Where does navigation fit into this model? Layout classes do something like a server side include at the appropriate place to incorporate the navigation. A layout class could have the navigation inline, but I don't do that because I may need to reuse the same navigation class from two different layouts. Hence, a layout may "include" a bunch of navigation classes and call its screen subclass (i.e. callNext), and it "lays out" how all these things relate to each other in the HTML. In summary, I use inverse extend to apply the "don't repeat yourself" rule to HTML. Just because it isn't real code doesn't mean it's a good idea to duplicate it all over the place :-D Heard about Common Lisp? What actually makes this different from Common Lisp's CLOS system? There you have :before, :after and :around methods. These are used to force som actions onto methods defined in subclasses. If I have misunderstood your article, I apoligize. If not, I think this is worth mentioning, to put it mildly. Regards. Heard about Common Lisp? By the way, the fact that this design pattern is actually a feature in Beta as well as possibly Common Lisp does not detract from its usefulness as a design pattern. In fact, the opposite is true. It's a way of organizing your code that is helpful for solving a certain class of problems. Sometimes your language supports this technique directly, sometimes it doesn't. Either way, you must use the technique delibrately. By writing a design pattern spec as I have done, we now have the freedom to call this technique by name. Newbies can hear the name and go look it up if they need to. Furthermore, the name transcends programming languages. I.e. a Java programmer and a Beta programmer can now more easily understand each other when suggesting this technique as a solution to a problem. I got the feeling when I read your post that you were saying, "Shoot, we've been doing that for years in Common Lisp!" Exactly ;) That's the point of a design pattern--to document and name techniques that experienced programmers have been doing for years. In fact, if no one had been using this trick, it would not qualify as a design pattern! Heard about Common Lisp? Yes, I know about Common Lisp ;) Do :before, :after, and :around allow you to march all the way down the inheritance hierarchy (i.e. for more than two levels) keeping the same method name? It sounds to me that maybe what you're suggesting is that each child class method can use :around to pass control to the parent class method, and have the parent class method call back to the child class method at the appropriate time. Is that right? Can you post an example using one of my test cases from above? 
More on Common Lisp An example/introduction to this feature can be found A Brief Guide to CLOS. And yes, you can combine this feature all the way, but no, the behaviour is probably not what you have in mind. Actually, if you have a class hierarchy, the most specific :before and :after methods are run ourtermost. Here is a simple example: (defclass food () ()) [defclass food nil nil [0 0 0] nil nil nil [] nil nil nil ...] (defmethod cook :BEFORE ((f food)) (print "A food is about to be cooked.")) (defmethod cook :AFTER ((f food)) (print "A food has been cooked.")) (defclass pie (food) ((filling :accessor pie-filling :initarg :filling :initform 'apple))) (defmethod cook ((p pie)) (print "Cooking a pie.") (oset p filling (list 'cooked (pie-filling p)))) (defmethod cook :BEFORE ((p pie)) (print "A pie is about to be cooked.")) (defmethod cook :AFTER ((p pie)) (print "A pie has been cooked.")) (defclass lutefisk-pie (pie) ()) (defmethod cook :BEFORE ((p lutefisk-pie)) (print "Lutefisk in oven")) (defmethod cook :AFTER ((p lutefisk-pie)) (print "Lutefisk ready for you.")) (setq lute-1 (make-instance 'lutefisk-pie :filling 'stockfish)) (cook lute-1) "Lutefisk in oven" "A pie is about to be cooked." "A food is about to be cooked." "Cooking a pie." "A food has been cooked." "A pie has been cooked." "Lutefisk ready for you." (cooked stockfish) -------------- So, I was a little bit to quick. This is not what you want with your decoration example. However, you can use the meta-object protocol (MOP) to define other kind of behaviour. Thanks for the article anyway. Recards, Jon More on Common Lisp As far as I understand, you can actually quite easy change this behaviour through define-method-combination calls. Common lisp is extremely powerful for these kind of things. And, if I forgot, yes, you are right. My comment was not an attempt to disregard your main ideas. (However, one should expect some reference to languages that actually do implement this.) Merry Christmas. :-) -- Jon thanks! Wow! Thanks for your helpful comments! Although, as you said, you were too quick the first time, I'm confident that you could do this with the CLOS. This only serves to humorously prove my theory that no matter what you write about, someone somewhere has already done it in Lisp or it already exists as a feature in Common Lisp. I've joked that you could make a living writing about features that already in exist in Common Lisp, reimplementing them in languages like Python and Ruby. ;) At least this time, it wasn't *an obvious ripoff* in that Common Lisp doesn't already have a special form for it (that we know of) ;) The Beta programming language Doug Landauer was helpful enough to point out that this design pattern was actually implemented originally as a feature of the Beta programming language:
http://www.linuxjournal.com/article/8747?quicktabs_1=0
Selfie cheap. OK, let's put this to work as something other than a vanity clicker! There are no instructions which come with this; it's delivered in a little plastic bag and that's it. Time to get hacking!

Aim

Once paired to a server, like the Raspberry Pi, pressing the button should run a program to turn on my Lifx bulbs.

Cracking It Open

With the battery panel slipped off and the cell removed, it's fairly easy to open the case. Fingernails are sufficient - no screws or glue! It's an AIROHA AB1126A. AB1126A is an optimized single-chip solution which integrates baseband and radio for wireless human input device applications, especially for remote smartphone camera control. It complies with Bluetooth system version 3.0. But what happens when we ZOOM! ENHANCE!? The 24C16N is a fairly generic EEPROM. But what's this?!?! The chip is listed as an AB1127A. A chip which, seemingly, doesn't exist. Onwards!

Getting Started

When switched into the "on" position, the dongle is ready to pair. From the Ubuntu command line:

$ hcitool scan
Scanning ...
        80:00:00:00:EE:E0       AB Shutter 3

Aha! We've found it. What sort of device is it?

$ hcitool inq
Inquiring ...
        80:00:00:00:EE:E0       clock offset: 0x0acd    class: 0x002540

It shows up as a keyboard. Let's connect to it and trust it.

$ bluez-simple-agent hci0 80:00:00:00:EE:E0
Release
New device (/org/bluez/794/hci0/dev_80_00_00_00_EE_E0)
$ bluez-test-device trusted 80:00:00:00:EE:E0
$ bluez-test-input connect 80:00:00:00:EE:E0

To check that it is seen and connected properly:

$ xinput
↳ AB Shutter 3        id=13    [slave keyboard (3)]

Nice! Running xinput query-state "AB Shutter 3" allows us to see which keyboard keys are activated when the buttons are pressed. It turns out that the iOS button sends Volume Up (key 123) whereas the Android button sends Enter (key 36).

It works! Sorta...

Pressing the selfie-button instantly sends the command to my computer! Well... until the button goes to sleep. The device is powered by a CR2032 battery which, despite the power efficiencies of Bluetooth, isn't magical. After a few minutes of idleness, the device goes to sleep. Pressing any button wakes it up and re-pairs the connection - but then another button press is required to send a key press. The pairing process only takes a couple of seconds, so it's not quite instant.

Make it do something useful

Having an external button which can increase the volume or send an enter command isn't very useful. I want to press the button and have a program run which will (for example) turn on my lights.

Run a program when the Bluetooth connection is made

Because the device goes to sleep after a few minutes of inactivity, we need a way to listen for a connection. So, when a button is pressed for the first time, the device connects and a program is run. I've half-inched the instructions from this InOut Board tutorial. First of all, make sure Python has the ability to work with Bluetooth:

sudo apt-get install python-bluez

#!/usr/bin/python
import bluetooth
import time

while True:
    print "Checking " + time.strftime("%a, %d %b %Y %H:%M:%S", time.gmtime())
    result = bluetooth.lookup_name('80:00:00:00:EE:E0', timeout=5)
    if (result != None):
        print "Device detected"
        # Do Something
    else:
        print "Device NOT detected"
    time.sleep(6)

With that running constantly in the background, you can perform an action whenever the device connects.

Run a program when a button is pressed

Right, this is where it gets tricky! Ubuntu doesn't seem to differentiate between different keyboards attached to a device.
This means you can't use loadkeys to swap keys, nor xkb. You can, however, use xkbcomp to remap the buttons on a specific device (thanks to Stephen Wing for that tip). This will convert the Volume Up to XF86Launch1 and Enter to XF86Launch2 - those are multimedia keycodes which shouldn't be assigned to anything by default.

remote_id=$( xinput list | sed -n 's/.*AB Shutter 3.*id=\([0-9]*\).*keyboard.*/\1/p' )
[ "$remote_id" ] || exit
mkdir -p /tmp/xkb/symbols
cat >/tmp/xkb/symbols/custom <<\EOF
xkb_symbols "remote" {
    key { [ XF86Launch1 ] };
    key { [ XF86Launch2 ] };
};
EOF
setxkbmap -device $remote_id -print | sed 's/\(xkb_symbols.*\)"/\1+custom(remote)"/' | xkbcomp -I/tmp/xkb -i $remote_id -synch - $DISPLAY 2>/dev/null

The script needs to be re-run every time the Bluetooth connection is re-established. Probably best to run it on reconnect as part of the Python code above. So, that remaps the inputs for that Bluetooth button. OK, but how do we get XF86Launch1 to launch a program? It's pretty easy to set keyboard shortcuts in the GUI - but how do we do it on the command line? Well, you can't. There's no way to tell a shell to run a program when a specific key has been pressed. So, it's back to Python and listening for the key to be pressed. Which I have no idea how to do! If you know how to detect multimedia keys, please leave a comment or answer this StackOverflow question. Or - let me know a better, more obvious way that I'm missing!

BlueTooth buttons are available on AliExpress and Amazon for UK customers.

40 thoughts on "Cheap BlueTooth Buttons and Linux"

You should be able to read the key press events from one of the /dev/input/eventX files. Try running evtest
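A minimal sketch of that approach with the python-evdev package (pip install evdev); the device name is the one from the pairing output earlier, and the key codes are the ones identified with xinput query-state:

from evdev import InputDevice, ecodes, list_devices

devices = [InputDevice(path) for path in list_devices()]
button = next(d for d in devices if d.name == "AB Shutter 3")
button.grab()  # keep the key presses away from X11 / the console

for event in button.read_loop():
    if event.type == ecodes.EV_KEY and event.value == 1:  # key down
        if event.code == ecodes.KEY_VOLUMEUP:    # iOS button
            print("iOS button pressed - run your program here")
        elif event.code == ecodes.KEY_ENTER:     # Android button
            print("Android button pressed")

Like evtest, this normally has to run as root (or as a user in the input group) to be able to open the event device.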
I've been thinking on similar lines, per twitter, and wonder about changing the device firmware to choose a different key. Reason for this is I want several buttons to each have different functions. Another advantage is it could work on PC and Mac easily. Your way is very neat for Pi control and way easier to repeat under Linux, so I will reproduce it for my boiler automation as a boost button. Will order 3 from the same vendor and see if I can also get at the firmware via those UART tags. Having control over either side of the mapping would be superb. I've only bought the one. My concern is that if they all identify as the same model, it might be hard to tell them apart in software. Interested to see how you get around it. Please, PLease DO NOT REMOVE THE HARDWIRED SAFETIES ON THAT BOILER PLEASE. We had a wonderful brilliant software-oriented college-graduate do the control system update design for boiler controls (Where - don't ask). He "ripped out all that electrical stuff". A "failure" stopped his computer and six each six-inch valves on a full-pressure gas line opened up into one hot boiler and five cold ones with a common chimney. The technician in the building knew he could not run away fast enough to escape the blast, so he stayed and manually killed the gas. Do not remove or bypass the hardwired safeties on any fueled device - you could do jail time. Please withhold my email address for obvious reasons. I 100% agree with you, and a great safety warning. Glad that situation did not end as badly as it could have. To be specific about the scope of my own tinkering, my combi boiler has a dedicated timer control interface, which is accessible to end users. It is of the 'zero volts' type, meaning that you are operating a simple switch, closing a circuit that starts and ends at the main control board, which I DO NOT MESS WITH. I already had an aftermarket timer switch, more sophisticated than the onboard timer, but still very closed. I have this on now, but have tested previously replacing it with a standalone Pi, implementing a simple timing schedule. I am preparing a nicer, web controllable version to run on the pi permanently. I am interested in using the bluetooth buttons to give it a 'just boost now' option which my wife, who is competent but disinterested in IT, a tangible improvement over our current system, and to remove the need for her to access the web page. I plan 4 vital safety features, even given that I am using a purpose built timer control interface: 1) The interface will be one designed for switching mains from a Pi/Arduino, with proper separation of the high and low voltage sections, even though it doesn't need to be, for a few pounds extra it gave me a much more robust switch. The relay fails open, so the boiler turns off if it dies. 2) The Pi does not get to control the boiler direct. The switch signal has to be passed via an Arduino, whose only role in life is to sit in between and enforce a policy that the boiler cannot be switched on or off more than once per minute, or 10 times in 1 hour. If the Pi asks for more frequent changes, it reschedules if within 10 seconds, or sends back an error. This is because boilers are not designed to be turned on and off like a PWM duty cycle, and could be damaged if they spent all day turning on and off. The Arduino also sends a signal every hour to the Pi to say it is alive and still running its program, because I am anal about debugging in any live system. 
3) The Pi only accepts connections from within my home network AND there will be a PIN page before I put it 'live'. 4) The Pi, Arduino and relay are in a sensible electronics enclosure, with good cabling, glands and cable strain relief, for the control wire, 5VDC in and wired ethernet. There is no onboard wifi, and the mains power point is some distance from the Pi. This is the result of my own risk assessment and mitigation thoughts, and doesn't constitute a recipe for anyone seeking to do the same thing - may be worth considering the above, but research the risks yourself. Is there anything more I could/should think about, in terms of potential risks and mitigations? I've ordered 3, this way I can test concurrent use, and afford to accidentally bork one. How did you do with 3 of them? I ordered 6 and would like to use all 6 for competitions time measurement. Did you success with three buttons associated on one peer? Someone has a complete example how to remap ABShuter ? I want to move UP and DOWN a pdf file. Thanks. Hi Daniel, did you ever find a solution for your UP and DOWN on PDF files. It is exactly what I am looking for as well. Hi Terence, thanks for this page - I found a selfie button in my local PoundWorld store, looks exactly like the one you have pictured (externally, at least), but I can't get the Pi to recognise it. Both the Pi and selfie button are working with other devices. The Pi dongle reports as " Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)" in lsusb and this seems to be the "chipset of choice" on various forums. I'm persevering & will update you as I get results (or not). is there a way to adapt this for bluetooth 4.0 buttons? I picked up some cheap ones and I can connect to them, I just can't figure out how to capture the button press when I'm connected to the button The updates have reminded me I never received those buttons I bought... gaah. Time to chase the Amazon seller... Hello, do you know a way to avoid the button going to sleep? I would like to put them in my children bags so that the RPi detects them when they enter the house. But as the button is sleeping, cannot work... Thanks. No, they are designed to go to sleep fairly quickly to save battery power. I suggest you buy a dedicated beacon. See my review of the Chipolo. I would like to use it to trigger 2 actions in my Android app. Did anyone try to use it with Android? I made a quick and dirty script, an udev rule and systemd service to control my lifx light with one of these cheap remote. You can find all of this on It may be useful for somebody. Thanks for your hints. I implemented remapping via udev, this will works whitout an xserver. You can find my notes and rules here: Hello, I've recently built an Amazon Dash doorbell, which is basically an Tide Amazon dash which I connected to my home WIFI... I have a Pogoplug running Debian Linux, and I used an ARP probe python script which I placed in my cron jobs (it just run in a loop waiting for the Dash to connect to the WIFI) like this script:) once the Dash is pressed, it connects to the WIFI and disconnects, of which my Pogoplug detects the MAC address and executes a bash script... The Bash script basically just emails a screen grab from my CCTV using RTSP, sends the JPG to Flickr, and posts it to my facebook page... I also attached a USB speaker to my Pogoplug, and it then plays a WAV file (DING-DONG.WAV) which notifies people at home that there is someone at the door... 
I planned on using a bluetooth button instead as the Dash is limited to 1000 presses before it dies and the battery is hardwired... I was thinking that all I need is to figure out the button press from Bluetooth, but as you say, the bluetooth button goes to sleep mode, and wakes up on first press? Yes, they go to sleep after some time and then wake up on a key press. I f you don't care which button is pressed, you could do something similar to what you did with the dash: detect when a Bluetooth device with the correct identifier is paired with you. udev may help with that, maybe, I don't know. That's what I'm thinking I will do... they sell bluetooth buttons for $3 at the a dollar store here.... will purchase some and see if I can get it to work with my Pogoplug... I was thinking of eventually shifting my setup to my Raspberry Pi, but since I'm using the Pogoplug as a webcam server and it has lots of USB ports, I think, I'd stick with it.. Can I use the bluetooth shutter button to connect to my Arduino and act as a bluetooth receiver to make a LED on/off from my phone? Instead of getting a bluetooth module for my arduino Probably. Try it and let us know! And how about several buttons connected to one server service at a time? Is that possible? It should work. Please let us know what you build. Hey, thank you very much for this post! Perhaps somebody is helped with my solution for my headless raspberry pi3: An udev rule which matches the MAC Address of the button, could be one for different buttons. #/etc/udev/rules.d/98-wolbutton.rules ATTRS{phys}=="b8:27:eb:a0:40:b7", SYMLINK+="wolpc", ACTION=="add", TAG+="systemd", ENV{SYSTEMD_WANTS}="wolbuttonpc.service" This rule triggers a systemd service which starts a python script /etc/systemd/system/wolbuttonpc.service [Unit] Description=WOL Button fuer PC [Service] Type=simple ExecStart=/usr/local/bin/wolbutton.pc.py Restart=no /usr/local/bin/wolbutton.pc.py The python script, slightly modified from Jon Burgess, sends a wol packet at first run and then every time the button goes up (holding the button emits a lot of button downs for me) #!/usr/bin/env python import sys import evdev from wakeonlan import wol import syslog devices = [evdev.InputDevice(fn) for fn in evdev.list_devices()] if len(devices) == 0: print "No devices found, try running with sudo" sys.exit(1) for device in devices: if device.phys == "b8:27:eb:a0:40:b7": device.grab() wol.send_magic_packet("AC:22:0B:C5:5C:CF") for event in device.read_loop(): #event at 1492803542.433958, code 115, type 01, val 00 if event.code == 115 and event.type == 01 and event.value == 00: syslog.syslog('WOL gesendet') wol.send_magic_packet("AC:22:0B:C5:5C:CF") Ruby service for this button Bluebutton Simple daemon that allows you to execute action when bluetooth button shutter pressed. So you can control your PC by low energy button device and few scripts. Hi, not sure I should post to this old thread... I bought a "Bluetooth Remote Shutter". I'm able to connect and trust it from an RPI running Jessie Stretch. I can see button pushes using the getButton.py script! But, there are many events that occur with each button push. 
For example when I push what is probably "KEY_3" I see this:

    key event at 1534185281.995662, 4 (KEY_3), up
    key event at 1534185281.995662, 193 (KEY_F23), up
    key event at 1534185281.995662, 194 (KEY_F24), up
    key event at 1534185281.995662, 184 (KEY_F14), up
    key event at 1534185281.995719, 189 (KEY_F19), up
    key event at 1534185281.995719, 190 (KEY_F20), up
    key event at 1534185281.995719, 191 (KEY_F21), up
    key event at 1534185281.995719, 192 (KEY_F22), up
    key event at 1534185281.995740, 185 (KEY_F15), up
    key event at 1534185281.995740, 186 (KEY_F16), up
    key event at 1534185281.995740, 187 (KEY_F17), up
    key event at 1534185281.995740, 188 (KEY_F18), up

So, I wonder if the device is actually sending all those keys or whether something in the RPi is running a "macro" on receipt of certain keyboard input. Where/how would I check for this? Regardless, I figure I could ignore the extra "keystrokes" and use the first one in each burst. But, I'd like to understand what's going on. Thanks!

    sudo nano /etc/udev/rules.d/70-divoom.rules

    ACTION=="add", SUBSYSTEMS=="input", ATTRS{name}=="11:75:58:C5:2D:6F", OWNER="username", SYMLINK+="divoom", TAG+="systemd" #, ENV{SYSTEMD_WANTS}="divoom.service"

    sudo nano /etc/systemd/system/divoom.service

    [Unit]
    Description=Divoom smartlight play button listener

    [Service]
    Type=simple
    User=yourusername
    ExecStart=/home/yourusername/project.sh

    sudo nano project.sh

    #!/bin/bash -eu
    cd /home/username/projectname/
    evtest --grab /dev/divoom | grep --line-buffered 'value 1$' | while read line ; do
        espeak "yeah this works"
    done

For some reason this tutorial is not working with some Bluetooth selfie buttons; the only one I was able to make it work with was this one. Nothing else seems to work - any idea on how to find a new solution?

Hi - this blog post is 4 years old, so it won't be up-to-date. If you can explain what problems you're having, perhaps someone can offer help.

Sure, and sorry to bother you - I know this is an old post, but it's a really useful one. I'm building two devices that will alert the police when I press the button, and I noticed most new bluetooth selfie buttons are not working with any of these techniques; the only one working is the one from the Amazon link. I also noticed that when using the command sudo showkey, nothing happens when I press the already-paired AB Shutter3, but that is not the case with the one from the Amazon link. I know the post is a few years old.

That tag actually has 5 buttons according to the reference schematic I found (not sure if they are programmed), and it also includes the pinout for the serial programmer. The chip also comes in a QFN40 package.
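For the "burst of extra keycodes" problem mentioned above, one pragmatic option is to debounce in software and only act on the first event of each burst. Below is a minimal sketch of that idea, not taken from the thread; it assumes python-evdev is installed, and the device path and cooldown value are placeholders you would adjust:

    #!/usr/bin/env python3
    # Hypothetical debounce sketch: react only to the first key event in a burst.
    import time
    import evdev

    DEVICE = "/dev/input/event0"   # placeholder: path of the paired button
    COOLDOWN = 0.5                 # seconds during which trailing events are ignored

    device = evdev.InputDevice(DEVICE)
    device.grab()                  # keep the presses away from the console

    last_handled = 0.0
    for event in device.read_loop():
        # only care about key-down events
        if event.type != evdev.ecodes.EV_KEY or event.value != 1:
            continue
        now = time.monotonic()
        if now - last_handled < COOLDOWN:
            continue               # part of the same burst, ignore it
        last_handled = now
        print("button pressed, code", event.code)   # do something useful here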
https://shkspr.mobi/blog/2016/02/cheap-bluetooth-buttons-and-linux/?utm_source=pocket_mylist
Hi there! Also a passion for SAPUI5 and Google Firebase? Then you just hit the right blog!

Back at university I worked a lot with Google Firebase to create web and native applications. For the last two years I have been spending my time working with SAP services, especially with the SAPUI5 framework in the SAP Full-Stack WebIDE. There is no way I would switch the SAPUI5 framework or the SAP Full-Stack WebIDE for another framework or editor, because they rock! But as we all know, everything keeps evolving… so I tried to stay tuned and started wondering: why not bring two of my favorites together? Well, this is exactly what I did!

Here you can find some other blogs where I used Firebase services in SAPUI5 applications: SAPUI5 FIREBASE BLOG SERIES.

But let's continue with this one for the moment: using the Firebase Cloud Firestore database as back-end together with the SAPUI5 framework to create a realtime web application. So no polling data with setInterval functions, but real-time data provided by the back-end. In this blog I will show you how to implement Firebase into your UI5 application, by creating, updating and deleting data in the back-end, which will trigger automatic data updates in your application. Let's get started!

1. Initialize and Setup your Firebase Project

The first thing you want and need to do is create a Firebase project. This can be done in the Firebase Console. More information about Firebase.

1.1 Here is how to get started with Firebase on the web.

1.2 Here is how to get started with Cloud Firestore on the web.

1.3 Cloud Firestore Security-Rules

Use the following security rules only in a test environment. We need these rules so we can consume our data from the Firebase Cloud Firestore database in our SAPUI5 application.

    // Allow read/write access to all users under any conditions
    // Warning: **NEVER** use this rule set in production; it allows
    // anyone to overwrite your entire database.
    service cloud.firestore {
      match /databases/{database}/documents {
        match /{document=**} {
          allow read, write: if true;
        }
      }
    }

After development we close our database read and write access for everyone except our authenticated users.

    // Allow read/write access on all documents to any user signed in to the application
    service cloud.firestore {
      match /databases/{database}/documents {
        match /{document=**} {
          allow read, write: if request.auth.uid != null;
        }
      }
    }

Or we close the database for everyone.

    // Deny read/write access to all users under any conditions
    service cloud.firestore {
      match /databases/{database}/documents {
        match /{document=**} {
          allow read, write: if false;
        }
      }
    }

1.4 Add some test data to the Cloud Firestore database

Next we want to populate our database with a collection of shipments: some documents that represent real shipments, with some properties for each shipment. Add as many shipments (documents) as you like.

1.5 Comparison between OData and Firebase Cloud Firestore.

2. Initialize and Setup your SAPUI5 Application

Let's keep this rocking and jump into the SAPUI5 world! Of course we will work in our beloved SAP Full-Stack WebIDE.

2.1 Select a template to start from

For this demo app we will start from a UI5 template application.

2.2 Add the core Firebase JS SDK

We add the core Firebase JS SDK to our index.html file, along with our Cloud Firestore JS SDK. Add the following Firebase imports after the first script tag.
    <!-- The core Firebase JS SDK is always required and must be listed first -->
    <script src=""></script>
    <script src=""></script>

Your index.html file should look like this.

2.2 Create a Firebase.js file

Next we create a Firebase.js file at the root level of our webapp folder. In here we will store our Firebase config and we will initialize Firebase for our app. Finally we expose our Firebase logic in the form of a JSON model.

As you can see, the Firebase config gathered from the Firebase console is pasted inside our Firebase.js file. We create our own initializeFirebase function which will initialize Firebase, along with Cloud Firestore, in the form of a JSON model.

    sap.ui.define([
        "sap/ui/model/json/JSONModel",
    ], function (JSONModel) {
        "use strict";

        // Firebase-config retrieved from the Firebase-console
        const firebaseConfig = {
            apiKey: "YOUR-API-KEY",
            authDomain: "YOUR-AUTH-DOMAIN",
            databaseURL: "YOUR-DATABASE-URL",
            projectId: "YOUR-PROJECT-ID",
            storageBucket: "YOUR-STORAGE-BUCKET",
            messagingSenderId: "YOUR-MESSAGE-SENDER-ID",
            appId: "YOUR-APP-ID"
        };

        return {
            initializeFirebase: function () {
                // Initialize Firebase with the Firebase-config
                firebase.initializeApp(firebaseConfig);

                // Create a Firestore reference
                const firestore = firebase.firestore();

                // Firebase services object
                const oFirebase = {
                    firestore: firestore
                };

                // Create a Firebase model out of the oFirebase service object which contains all required Firebase services
                var fbModel = new JSONModel(oFirebase);

                // Return the Firebase Model
                return fbModel;
            }
        };
    });

2.3 Call and Initialize Firebase in the Component.js file and set it as a model

Import the Firebase.js file in the sap.ui.define method. Call the initialize function and set it as a model with the name 'firebase'. From now on our Firebase model is set and ready to be used. Your Component.js file should look like this (the extraction lost part of the original file; the init function below is restored to the standard UI5 component shape around the line that sets the firebase model):

    sap.ui.define([
        "sap/ui/core/UIComponent",
        "sap/ui/Device",
        "sap/firebase/SAP-Firebase-Connect/model/models",
        "./Firebase"
    ], function (UIComponent, Device, models, Firebase) {
        "use strict";

        return UIComponent.extend("sap.firebase.SAP-Firebase-Connect.Component", {

            init: function () {
                // call the init function of the parent component
                UIComponent.prototype.init.apply(this, arguments);

                // set the firebase model by calling the initializeFirebase function in the Firebase.js file
                this.setModel(Firebase.initializeFirebase(), "firebase");
            }
        });
    });

2.4 Prepare your Main.view.xml file

In our view we want to show our shipments: shipment id, origin, destination and status. This in a table with column list items. But let's do ourselves a favor and remind ourselves about the Firebase security rules: we don't want to forget to close the public read and write access at the end of our demo test phase. Therefore we create a little message strip that will remind us to open and close the security rules for read and write access.

    <MessageStrip text="{i18n>fbSecurityRules}" type="Warning" showIcon="true" showCloseButton="true" class="sapUiMediumMarginBottom"/>

So now that we have created our own personal reminder message strip, we can add our table to the view. We bind our root path of shipments to the items aggregation of our table. This way we can bind our shipment properties in the respective column list item. We bind our title and column names to the correct i18n keys and, last but not least, the values into the Text controls.
    <Table id="shipmentTable" items="{/shipments}">
        <headerToolbar>
            <Toolbar>
                <content>
                    <Title text="{i18n>Shipments}" level="H2"/>
                </content>
            </Toolbar>
        </headerToolbar>
        <columns>
            <Column>
                <Text text="{i18n>ShipmentId}"/>
            </Column>
            <Column>
                <Text text="{i18n>Origin}"/>
            </Column>
            <Column>
                <Text text="{i18n>Destination}"/>
            </Column>
            <Column>
                <Text text="{i18n>Status}"/>
            </Column>
        </columns>
        <items>
            <ColumnListItem>
                <cells>
                    <Text text="{code}"/>
                    <Text text="{origin}"/>
                    <Text text="{destination}"/>
                    <ObjectStatus text="{status}"
                        state="{= ${status} === 'Shipped' ? 'Success' : ${status} === 'Missing' ? 'Warning' : ${status} === 'Preparing' ? 'Information' : 'Error'}"
                        icon="{= ${status} === 'Shipped' ? 'sap-icon://accept' : ${status} === 'Missing' ? 'sap-icon://status-critical' : ${status} === 'Preparing' ? 'sap-icon://begin' : 'sap-icon://status-negative'}"/>
                </cells>
            </ColumnListItem>
        </items>
    </Table>

This way our table will be populated once we receive the data from the Cloud Firestore. We can add some nice UI5 features to our table by using the ObjectStatus control. This way we can show the shipment status with a color (state) and a symbol (icon). But how do we achieve this? Please, not through a formatter: a formatter here would be (in my opinion) an unnecessary implementation. So let's use expression binding. More information about expression binding can be found in the SAPUI5 Demo Kit.

How to read this state expression binding for the state (color) property of the ObjectStatus? For the received value ${status}:

If Status equals Shipped then Success
Else if Status equals Missing then Warning
Else if Status equals Preparing then Information
Else Error

    state="{= ${status} === 'Shipped' ? 'Success' : ${status} === 'Missing' ? 'Warning' : ${status} === 'Preparing' ? 'Information' : 'Error'}"

The same logic is applied to the icon of the ObjectStatus, with icon values of course. We have now finished the adjustments and creation of the Main.view.xml file.

2.5 Populate your i18n file with the key-values

You can copy-paste the following key-values for your i18n model. We used these in our text bindings in the Main.view.xml.

    title=SAP-Firebase Connect App
    appTitle=SAP-Firebase-Connect
    appDescription=App Description
    fbSecurityRules=Don't forget to turn on and off the read permission again in the Firebase Firestore security rules.
    Shipments=Shipments
    ShipmentId=Shipment id
    Origin=Origin
    Destination=Destination
    Status=Status

2.6 The Main.controller.js file

Now what will happen in the Main.controller.js file?

- Get our Firebase model, with all the services we added to it in the Firebase.js file.
- Create a Firestore reference.
- Create a collection reference to the shipments collection.
- Initialize an array for the shipments of the collection as an object.
- Create and set the created object to the shipmentModel and set it to the view.
- Get a single set of shipments.

This will all be done in the onInit function of our Main.controller.js. Your onInit function should look like this.
    onInit: function () {
        // Get the Firebase Model
        const firebaseModel = this.getView().getModel("firebase");

        // Create a Firestore reference
        const firestore = this.getView().getModel("firebase").getData().firestore;

        // Create a collection reference to the shipments collection
        const collRefShipments = firestore.collection("shipments");

        // Initialize an array for the shipments of the collection as an object
        var oShipments = {
            shipments: []
        };

        // Create and set the created object to the shipmentModel
        var shipmentModel = new JSONModel(oShipments);
        this.getView().setModel(shipmentModel);

        // Get single set of shipments once
        this.getShipments(collRefShipments);
    },

At this point we still need to create our getShipments function.

    getShipments: function (collRefShipments) {
        collRefShipments.get().then(
            function (collection) {
                var shipmentModel = this.getView().getModel();
                var shipmentData = shipmentModel.getData();

                var shipments = collection.docs.map(function (docShipment) {
                    return docShipment.data();
                });

                shipmentData.shipments = shipments;
                this.getView().byId("shipmentTable").getBinding("items").refresh();
            }.bind(this));
    }

This function takes the shipments collection reference (created in onInit) as a parameter. On this collection we perform a get call which will return all our shipments (documents). Once the call is finished, we receive our shipments in the collection parameter. At this point we want to get our shipment model and retrieve its data. Next we map our collection (result) so we can retrieve the data from each shipment (document), with the data() function from Firebase. Once this is done we add our result (shipments) to our shipmentData and we refresh the binding of the items in our table.

2.7 Let's run the application and check the result

When we launch the application we get the following result. Looks nice, right? Our table populated with our data, just the way it should be, using the UI5 framework. Nothing special so far, right? Let's add the magic!

2.7 Add a real-time listener for add, modify and deletion operations in the shipment collection

We saw that the data is fetched once from our shipment collection and displayed in our table. What if the shipments are so important that they need to be updated immediately? Well, then we add the real-time listeners. The first thing we want to do is comment out the getShipments call and add the getRealTimeShipments function in the onInit function of our controller, again passing our shipment collection reference.

    // Get single set of shipments once
    //this.getShipments(collRefShipments);

    // Get realtime shipments
    this.getRealTimeShipments(collRefShipments);

Next we want to implement our getRealTimeShipments function. This function looks like this - a little busy function, right?
    getRealTimeShipments: function (collRefShipments) {
        // The onSnapshot keeps the data up to date in case of added,
        // modified or removed data in the Firestore database
        collRefShipments.onSnapshot(function (snapshot) {
            // Get the shipment model
            var shipmentModel = this.getView().getModel();
            // Get all the shipments
            var shipmentData = shipmentModel.getData();

            // Get the current added/modified/removed document (shipment)
            // of the collection (shipments)
            snapshot.docChanges().forEach(function (change) {
                // set id (to know which document is modified and
                // replace it on change.type == modified)
                // and data of firebase document
                var oShipment = change.doc.data();
                oShipment.id = change.doc.id;

                // Added document (shipment): add to array
                if (change.type === "added") {
                    shipmentData.shipments.push(oShipment);
                }
                // Modified document (find its index and change current doc
                // with the updated version)
                else if (change.type === "modified") {
                    var index = shipmentData.shipments.map(function (shipment) {
                        return shipment.id;
                    }).indexOf(oShipment.id);
                    shipmentData.shipments[index] = oShipment;
                }
                // Removed document (find index and remove it from the shipments array)
                else if (change.type === "removed") {
                    var index = shipmentData.shipments.map(function (shipment) {
                        return shipment.id;
                    }).indexOf(oShipment.id);
                    shipmentData.shipments.splice(index, 1);
                }
            });

            // Refresh your model and the binding of the items in the table
            this.getView().getModel().refresh(true);
            this.getView().byId("shipmentTable").getBinding("items").refresh();
        }.bind(this));
    }

Let's split it up into pieces.

1. We want to listen for changes on all the documents in our collection, in this case shipments. We immediately bind 'this' to our function so we have a 'this' reference to work with later.

    // The onSnapshot creates a listener on our collection, in this case shipments
    collRefShipments.onSnapshot(function (snapshot) {

    }.bind(this));

2. In our onSnapshot function we get our shipmentModel and our shipments.

    // Get the shipment model
    var shipmentModel = this.getView().getModel();
    // Get all the shipments
    var shipmentData = shipmentModel.getData();

3. We do NOT want our full shipment collection again when changes occur. So we add a document listener that will return a specific document in the onSnapshot function.

    // Get the current added/modified/removed document (shipment) of the collection (shipments)
    snapshot.docChanges().forEach(function (change) {

    });

4. When we first load our application, the shipment's (document's) id is set. By doing this, we later have an easy reference to our shipment (document) when it is changed. We can set it here since the application will pass through the "added" branch when loading the data.

    // set id (to know which document is modified and replace it on change.type == modified)
    // and data of firebase document
    var oShipment = change.doc.data();
    oShipment.id = change.doc.id;

5. When a shipment is added in the Cloud Firestore, the type of the document change is "added", so we push this document into our shipments array.

    // Added document (shipment): add to array
    if (change.type === "added") {
        shipmentData.shipments.push(oShipment);
    }

6. If a shipment (document) is modified, then we look it up in our shipments array. This is where the earlier placed document id comes in handy: we search based on this id. The returned index is used to replace the old shipment with the new one.
    // Modified document (find its index and change current doc with the updated version)
    else if (change.type === "modified") {
        var index = shipmentData.shipments.map(function (shipment) {
            return shipment.id;
        }).indexOf(oShipment.id);
        shipmentData.shipments[index] = oShipment;
    }

7. The same logic is applied for the deletion of a shipment in the back-end. We look up the index of the deleted shipment and delete it from our array, using splice and NOT delete on the array element. Using delete would clear the values from the object (the list item in our table), but an empty list item would still be visible in the table.

    // Removed document (find index and remove it from the shipments array in the model)
    else if (change.type === "removed") {
        var index = shipmentData.shipments.map(function (shipment) {
            return shipment.id;
        }).indexOf(oShipment.id);
        shipmentData.shipments.splice(index, 1);
    }

8. Last, we hard refresh our view's model and we refresh the items binding of our table, outside our docChanges function.

    // Refresh your model and the binding of the items in the table
    this.getView().getModel().refresh(true);
    this.getView().byId("shipmentTable").getBinding("items").refresh();

Alright! We finished the full real-time implementation of our controller. Let's see this in action!

3. Demo action time!

There will be 4 steps in this demo video.

- Open the UI5 application.
- Modify data in the back-end and see the UI5 app responding to it.
- Add data in the back-end and see the UI5 app responding to it.
- Delete data in the back-end and see the UI5 app responding to it.

Awesome, isn't it? This with the SAPUI5 framework! Love it!

4. Recap time, what did we learn?

In this blog I went over a lot of functionalities of both UI5 and Firebase. So here are some key takeaways:

- The SAPUI5 framework is still our way to go in developing apps.
- Our preferred IDE is the SAP Full-Stack WebIDE, with no doubt.
- We can consume Google Firebase services in our SAPUI5 applications.
- No setInterval JavaScript functions or other polling methods are used.
- Use expression binding where possible over formatters.
- Firebase security rules are important!

Talking about security rules… you closed your database again with the correct security rules? 😉

Both of my favorites work together in an awesome integrated way! What do you think about it? Worth giving it a try?

Thanks for reading my blog about "Create SAPUI5 Applications with Google Firebase". I hope you found it interesting! See you next time!

Kind regards, Dries

Thanks for sharing Dries Van Vaerenbergh. Nice to have a Git project for the SAPUI5 application you developed.

Hi Dries, nice article that brings two of my favorite products together, even though I don't agree with you regarding your opinion about WebIDE. It's a great editor but depending on the requirements there are better ones. Thanks, Helmut

Hi Helmut, thank you for your reaction and happy to hear you share the same interests! Regarding the WebIDE, there are indeed other good editors that serve the case depending on the requirements, like you said. Totally agree on that. Kind regards, Dries

I have followed each step and when I try to run the application it shows that the Firebase JS resource is not loading. Can you tell me what might be the issue, and can you share the GitHub link for this application?

Hi Manohar, I was able to reproduce your error. I think your problem is situated in the Component.js file.
I displayed my whole Component.js file in the blog, including the following. This cannot be copy-pasted fully. You have to adjust the following line:

    sap/firebase/SAP-Firebase-Connect/model/models

to match your own namespace and app name:

    your/namespace/your-app-name/model/models

You can find yours easily in the manifest.json, for example (change the dots in the namespace to / in the Component.js). I think this is the problem. Could you check this and let me know? Good luck! Kind regards, Dries. PS: Nice that you are trying it out!

Hi Dries Van Vaerenbergh, thanks a ton for your reply - the application ran successfully. The mistake was in that scaffolding: I thought it was a standard library, and later I changed it to my application namespace. Anyway, keep posting new blogs.

Hi Manohar, awesome! New blogs coming soon. 🙂 Kind regards, Dries

It's very nice to see UI5 and Firebase working together. Great post!! Congrats Dries Van Vaerenbergh!

Rodrigo Henrique de Oliveira Bisterço, thank you a lot! Happy to hear you share the same thoughts!

I have always been working in the SAP/ABAP world, and some time ago I spent a year developing SAPUI5 apps, and I did not know these kinds of databases (we always use SAP as back-end and the SAP Gateway for OData). Honestly, I loved what you showed us, and your explanation is very clear and well explained. Thanks for letting us know how to combine these two worlds, Dries! Looking forward to more blogs from you!!

Hi Daniel, it is really nice to hear that! Thank you a lot! Happy to hear I was able to explain it in a proper way and that you like the idea of combining them. More blogs out now and more will follow. Kind regards, Dries

Brings back some nice memories - good to see UI5 & Firebase together again after all these years 🙂

Hi DJ, awesome to see how Firebase was used back then! Awesome to integrate Firebase and UI5. Kind regards, Dries

Hi, could you please share your GitHub link for this project? I'm facing a versioning issue and an error while getting data from Firestore.

Hi Kumararaja, nice to hear you are trying it out yourself. You can find the GitHub repository here: Best regards, Dries

Hi, I am learning SAPUI5, and I want to ask: can we implement a login system for many users, but with different roles? For example: an admin who has his own view and can add users, etc., and users who can request something which the admin can approve. If you have any documentation for that, for views changing for different roles, please share. Greetings.
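To make the namespace fix discussed above concrete, here is a small hypothetical example (the namespace "my.company.fireapp" is made up for illustration; the rest mirrors the Component.js shown earlier in the blog): if the app id in manifest.json were my.company.fireapp, the dependency list in Component.js would use the slash form of that namespace.

    // Hypothetical Component.js dependency list for an app with the
    // namespace "my.company.fireapp" (dots become slashes in the paths).
    sap.ui.define([
        "sap/ui/core/UIComponent",
        "sap/ui/Device",
        "my/company/fireapp/model/models",   // was: sap/firebase/SAP-Firebase-Connect/model/models
        "./Firebase"
    ], function (UIComponent, Device, models, Firebase) {
        "use strict";
        // ... same component code as shown earlier in the blog
    });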
https://blogs.sap.com/2019/06/03/create-sapui5-applications-with-google-firebase/
21 April 2008 05:15 [Source: ICIS news] SINGAPORE (ICIS news)--Asian naphtha prices hit another record high with the second half of June contract trading at $956.50/tonne on strong gains in the crude markets, according to ICIS pricing data on Monday. The contract was traded late Friday on a CFR (cost and freight) basis ?xml:namespace> On 16 April, the contract had breached $942/tonne CFR Japan, when Shell sold to Trafigura and Sempra to BP. Glencore also sold the same contract at $940/tonne to Trafigura. The gains in the naphtha markets still lagged behind gains in the crude oil markets last week as naphtha rose by 3.27% as against a gain of 3.37% in the Brent crude markets. Crude crossed the $115/barrel mark last week and was testing the $117/barrel mark. The naphtha market remained weak due to lacklustre demand from northeast (NE) Asian end-users as most of them had covered their May nominations and were waiting for prices to soften before seeking for June cargoes. The inter-month backwardation in the Asian naphtha market continued to crunch due to weak demand from NE Asian end-users, industry sources said on Monday. The inter-month spreads between second half of May/second half of June was seen at $3/tonne on Thursday, against $5/tonne last week. It crunched to $1/tonne towards the end of last.
http://www.icis.com/Articles/2008/04/21/9117485/asia-naphtha-rises-to-956.50t-on-crude-gains.html
CC-MAIN-2013-20
refinedweb
237
71.34
Issue Type: Improvement Created: 2011-07-20T16:23:16.000+0000 Last Updated: 2012-02-23T16:39:05.000+0000 Status: Closed Fix version(s): Reporter: Artur Bodera (joust) Assignee: Ben Scholzen (dasprid) Tags: - Zend\Mvc\Router Related issues: Attachments: This is the current way of configuring a single route: <pre class="highlight"> 'foo' => array( 'route' => 'foo', 'type' => 'Zend\Controller\Router\Route\StaticRoute', 'defaults' => array( 'action' => 'index', ), ), Because 99% of people will only use the basic route types (Static, Regex, Hostname, etc.) there is no need to enforce supplying the whole class name. As 2.0 is 5.3+, then we already have namespaces for that and we can educate people to use short class names. My suggestion is to allow this: <pre class="highlight"> 'foo' => array( 'route' => 'foo', 'type' => 'StaticRoute', // class name in Zend\Controller\Router\Route namespace 'defaults' => array( 'action' => 'index', ), ), Posted by Artur Bodera (joust) on 2011-07-20T19:23:47.000+0000 Fixed. Waiting for pull Posted by Matthew Weier O'Phinney (matthew) on 2011-07-25T21:06:56.000+0000 Patch looks good. However, the router is being rewritten currently (see); and the current implementation (SimpleRouteStack) already provides broker capabilities that implement the feature you're requesting (short-name mappings). We can resolve this issue once that work is merged to master. Posted by Matthew Weier O'Phinney (matthew) on 2011-07-25T21:07:40.000+0000 Assigning to Ben to resolve once router refactoring is merged to master. Posted by Adam Lundrigan (adamlundrigan) on 2012-02-23T16:39:05.000+0000 The new router was merged a while back, and this functionality is included there.
https://framework.zend.com/issues/browse/ZF2-40
CC-MAIN-2018-05
refinedweb
271
56.76
Rebooting R-Pi causes relay to click on. Posted: Wed Sep 16, 2015 11:26 pm I have a raspberry pi that I've been using to turn on a AC Solid state relay for my desktop light. It was all working fine with no issues until I replaced it with a DC SSR. Now when I reset the raspberry pi, the relay will click on which causes the light to turn on any time the power is cut to the pi. I've read about this happening but I can't find where I read it anyway and was wondering if anyone had any input on this. The pins I'm using to output a signal to the relay are the 5v for power and pin 7(on the board, not BCM). Here is the code that gets executed to turn the light on: Code: Select all import RPi.GPIO as GPIO GPIO.setmode(GPIO.BOARD) GPIO.setup(7,GPIO.OUT) GPIO.output(7, not GPIO.input(7))
https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=120888&view=print
CC-MAIN-2020-24
refinedweb
169
79.9
Here is something for the monks to mull over on the weekend. It is not a perl specific thing but I fear that large amounts of perl ingenuity will be required to solve it. Q: What is the shortest string that contains all the numbers from 0000 to 9999. For example: '012345678' contains 0123, 1234, 2345, 3456, 4567, 5678. Is it possible to create a string which contains all the numbers and no duplicates? To date my best achievement is: Length is: '10427'. Missing count: '0' Duplicate count: '424' [download] Please feel free to use this function to test any strings produced: # Will return an analysis of the string. # Usage: print analyse( $string ); sub analyse { # Make an array from the number and create hash. my $number = shift; my @string = split //, $number; my %count = (); # Work over the number adding values to the hash. while (1) { my $candidate = $string[0].$string[1].$string[2].$string[3]; $count{$candidate}++; # Get rid of first element of the array. shift @string; # Break out at the end of array. last if scalar @string < 4; } # Tally up the number of duplicates and missing numbers. my $duplicates = 0; my $missing = 0; for ( 0 .. 9999 ) { my $k = 0 x ( 4 - length $_ ) . $_; my $v = $count{$k}; unless ( defined $v ) { $missing++; } $duplicates += ( $v - 1 ) if $v > 1; } # Create the summary. my $text; $text .= " Length is: " . length($number) . "\n"; $text .= " Missing count: " . $missing . "\n"; $text .= "Duplicate count: " . $duplicates . "\n\n"; return $text; } [download] My thoughts on this puzzle are that you can go about it one of two ways: add unused numbers to the end of the string (technique used for results above) or remove duplicates. Probably the best is a combination of both. Update: Made a change to the analysis code - pesky CVS caught me out. --tidiness is the memory loss of environmental mnemonics What is the shortest string that contains all the numbers from 0000 to 9999. Assuming the numbers must all have 4 digits, the shortest theoretical length is 10003 digits if you can get them to overlap perfectly (10000 unique numbers and 3 digits of overhead.) As it happens, you can do it perfectly. Here's one example (be careful of newlines if you copy and paste it): The (admittedly lousy) script I used follows. It's a bit c-ish... -sauoq "My two cents aren't worth a dime."; --tidiness is the memory loss of environmental mnemonics By the way, you can improve your analyze function quite a bit. This piece in particular stood out to me: for ( 0 .. 9999 ) { my $k = 0 x ( 4 - length $_ ) . $_; [download] for ( 0 .. 9999 ) { my $k = 0 x ( 4 - length $_ ) . $_; [download] sub analyze { my $number = shift; my $length = length $number; my $duplicates = 0; my %seen; while (length $number >= 4) { $seen{substr($number, -4)}++ and $duplicates++; substr($number, -1, 1, ''); } my $missing = 10000 - keys %seen; return ($length, $missing, $duplicates) } [download] Yours is slicker and a good demonstration of the substr function. Just to prove it here are the benchmarks: Rate EvdB saouq EvdB 7.89/s -- -67% saouq 23.8/s 202% -- [download] So far, it would appear that there are at least 10000 minimal solutions. 1 for every possible set of first 4 chars. There are also multiple--number yet to be determined, my poor ol' pII is groaning:)--variations of each of these starting points. There's probably some mathematical way of calculating the total number of unique minimum solutions, but I can't see it. tye?, tilly?, Abigail? the NSA?. Anyone? Wrong. 
By the way it's not that sure that you can start with every 4 digit number. I think you can - but you need to prove it. Update: Or is it right? Let's have two differend 4 digit strings - if they are different then at least on one of the 4 position they have different digits - the permutation will change those digits to different digits as well. It means the permutation induces a 1-1 function on the 10000 4 digit strings. There can't be any more of them so in the whole resulting string there will be exactly 10000 different 4 digit strings. QED That was long time since I proved a theorem. I meant unique in the 'these two string compare differently' sense, rather that the set theory sense. By the way it's not that sure that you can start with every 4 digit number. I think you can - but you need to prove it. You can. First, it is sufficient to prove that you can do it with every possible pattern of same/differing numbers. Permutations take care of the rest. (Your thinking on that was fine, by the way.) There are only twelve such patterns: AAAA AAAB AABA ABAA BAAA AABC ABAC ABCA BAAC BACA BCAA ABCD. All you have to do is choose A, B, C, and D as four different digits and find a solution for each one. I chose A=0, B=1, C=2, and D=3 and then, using the same code in my original solution, found that there was indeed a solution for each of the 12 patterns. So, for a lower bound, we have at least 12 * 10! (or the number of starting patterns multiplied by the permutations.) Some other interesting facts (proofs left as an exercise): Based on a naive statistical analysis, we might expect about 2.8e5659 unique shortest strings. I got this number by computing the odds of a string of 10003 digits being a solution and multiplying that by the number of 10003-digit strings. Of course, floating point won't handle that. I got around this by reducing the formula a bit and computing the log() of the value instead and dividing by log(10) to get about 5659.45 ( all from the "perl -de 0" prompt, just in case anyone thought this was off-topic q-: ). The odds of me having made a mistake in this calculation are rather large (it was a quick hack with no double checking -- both the math and the arithmatic). I'll try to find time later to post my analysis and code used to do the calculations. I'm also not convinced (having not spent much time thinking about it) that a naive statistical analysis would be very valid. My qualms here appear to be diminishing the more I think about it (in part because the answer is so *huge*), but I still think the worry is worth mentioning. It certainly was an interesting problem. (: Also note that sauoq has shown us the lexicographically first of the shortest strings. I also have an inkling that we can eliminate the backtracking in sauoq's program (or maybe not...). But I've got real work™ to do... ): I also have an inkling that we can eliminate the backtracking in sauoq's program (or maybe not...). Your intuition is right. My (much nicer) second try below generates a valid solution string without backtracking. Notice it isn't the same string as my original answer though. (Nor is it a permutation.) #!/usr/bin/perl -w use strict; my $string = '9999'; # Starting with nines simplifies the loop my %seen = ( $string => 1 ); my $seen = 1; while ($seen < 10000) { for (0 .. 9) { my $candidate = substr($string, -3) . 
$_; next if $seen{$candidate}; $string .= $_; $seen{$candidate} ++; $seen++; } } print $string, "\n"; [download] update:i fixed a typo in the name of the sequences, when BrowserUK alerted me of my misspelling .-enlil Q: What is the shortest string that contains all the numbers from 0000 to 9999. Probably not the shortest, but likely close: perl -MCompress::Bzip2 -e " print Compress::Bzip2::compress('all the numbers from 0000 to 9999')" [download] ----I wanted to explore how Perl's closures can be manipulated, and ended up creating an object system by accident. -- Schemer Note: All code is untested, unless otherwise stated Yes No A crypto-what? Results (166 votes), past polls
http://www.perlmonks.org/?node_id=260002
CC-MAIN-2014-10
refinedweb
1,325
72.97
genderize 0.0.1 genderize: ^0.0.1 copied to clipboard Genderize API wrapper for Dart, determinate the gender of a name. Use this package as a library Depend on it Run this command: With Dart: $ dart pub add genderize With Flutter: $ flutter pub pub add genderize This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get): dependencies: genderize: ^0.0.1 Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more. Import it Now in your Dart code, you can use: import 'package:genderize/genderize.dart';
https://pub.dev/packages/genderize/install
CC-MAIN-2021-17
refinedweb
108
65.83
24 August 2010 17:06 [Source: ICIS news] SINGAPORE (ICIS)--PARS Petrochemical has shut its 600,000 tonne/year styrene monomer (SM) plant at ?xml:namespace> “The plant is expected to be shut for around one month,” said a China-based SM trader. The SM facility was taken off line on 22 August after the supply of benzene from Borzouyeh Petrochemical’s 430,000 tonnes/year plant, also at Assaluyeh, was shut down due to production problems, sources added. Borzouyeh Petrochemical, which is part of the Iranian state-owned National Petrochemical Co (NPC), could be also diverting its aromatics supply for gasoline blending, as Sources at National Petrochemical Co were not available for comment. “Several traders in Asia with buy contracts from the company had received notice that supply for September cargoes would be cancelled,” said an SM producer in A broker said: “Traders affected by the supply cut could be sourcing for spot parcels to meet their contract obligations.” Spot SM prices rose by nearly $10/tonne from 23 August to Tuesday, reaching $1,130-1,135/tonne (€893-897/tonne) CFR (cost & freight) ($1 = €0.79) Mahua Chakravarty contributed to this article For more
http://www.icis.com/Articles/2010/08/24/9387919/PARS-Petrochemical-shuts-Iran-SM-plant-on-benzene-shortage.html
CC-MAIN-2014-41
refinedweb
196
54.15
From Bugzilla Helper: User-Agent: Mozilla/4.76 [en] (Win98; U) BuildID: NS6 Final It actually doesn't crash the browser, it just appears to. It took me about 30 minutes before ns6 could import a 250k bookmark file. During that time ns6 stoped responding. After the import ns6 also took about 10 minutes to load each time. Reproducible: Always Steps to Reproduce: 1.import a large bookmark file 2. 3. Actual Results: Browser stoped responding for about 30 minutes till bookmark import was complete. Then each time NS6 was restarted it took about 10 minutes to load. After the load if I clicked on my bookmarks there was another 20 minutes of wait time. Expected Results: Imported them a lot faster. I'm not going to use ns6 or mozilla till this bug is fixed. I like quick access to my bookmarks. This is a dupe of bug 52144. Reporter : This isn´t the Netscape 6 bug reporting database. If you have problems with Netscape 6 use this URL : If you can reproduce a bug on mozilla feel free to open a new bug after you haves searched at the mozilla bug databse (bugzilla) if this bug is already filled. If you want to report performance related bugs then please post your used system. (Pentium III 800 with 128Mb.....) Thanks *** This bug has been marked as a duplicate of 52144 *** Verified dupe.
https://bugzilla.mozilla.org/show_bug.cgi?id=60510
CC-MAIN-2016-50
refinedweb
233
76.32
Prototype JavaScript framework - blog tag:prototypejs.org,2009:mephisto/blog Mephisto Noh-Varr 2009-06-16T22:23:13Z Andrew tag:prototypejs.org,2009-06-16:25365 2009-06-16T22:21:00Z 2009-06-16T22:23:13Z Prototype 1.6.1 RC3: Chrome support and PDoc <p>Today we’re announcing Release Candidate 3 of Prototype 1.6.1. Among the highlights of this release are official Chrome support, improved IE8 compatibility, faster generation of API documentation with <a href="" title="PDoc">PDoc</a>, and lots of bug fixes.</p> <p>Today we’re announcing Release Candidate 3 of Prototype 1.6.1. Among the highlights of this release are official Chrome support, improved IE8 compatibility, faster generation of API documentation with <a href="" title="PDoc">PDoc</a>, and lots of bug fixes.</p> <h3>Chrome support</h3> <p>Since <a href="" title="Google Chrome - Download a new browser">Google Chrome</a> is a close sibling of Safari, Prototype has had excellent Chrome compatibility ever since the browser was first released. Now we’re making it official: Prototype supports Chrome 1.0 and greater.</p> <p>If you have Chrome installed on your system (Windows only for now, even though early alphas exist for Mac), invoking <code>rake test</code> will run the unit tests in all locally-installed browsers, including Chrome. To run the unit tests in Chrome alone, try <code>rake test BROWSERS=chrome</code>.</p> <h3>Generate your own docs with PDoc</h3> <p>It’s been a long, strange trip for <a href="" title="PDoc">PDoc</a>, the inline-doc tool that will soon be for Prototype and <a href="" title="script.aculo.us - web 2.0 javascript">script.aculo.us</a> what <a href="" title="RDoc - Document Generator for Ruby Source">RDoc</a> is for <a href="" title="Ruby on Rails">Rails</a>. It started as Tobie’s brainchild over a year ago, but key contributions from <a href="" title="James Coglan">James Coglan</a> and <a href="" title="samleb's Profile - GitHub">Samuel Lebeau</a> have helped to carry it across the finish line.</p> <p>PDoc was a part of RC2, but has since been updated to make doc generation <em>much, much</em> faster. On my machine, a process that used to take 20 minutes now takes only <em>60 seconds</em>. Furthermore, we’ve solved a couple of minor issues that made it hard to build the docs on Windows.</p> <p>Ever since Prototype 1.5, we’ve kept our documentation in <a href="" title="Mephisto—The best blogging system ever">Mephisto</a>, the same engine that powers the rest of the site (and this blog). It’s served us well, but it meant that updating the docs became a chore that could only be started once we’d released a particular version. PDoc will make it far easier to maintain our documentation — and far easier to keep archival copies of the docs for older versions of Prototype.</p> <p>Upon final release of 1.6.1, we’ll put the generated docs on this site, just like Rails hosts <a href="" title="Rails Framework Documentation">its most recent stable documentation</a>. Until then, you can generate your own local docs by checking out the full source and running <code>rake doc</code> from the command line.</p> <h3>Other improvements</h3> <p>There have also been a number of bugs fixed since RC2 — including a heinous bug relating to <code>Event#observe</code> — and a number of key optimizations. We’ve further improved IE8 compatibility, solving some edge-case issues that popped up since RC2. 
Credit goes to Juriy (kangax), our newest team member, for working tirelessly these last few months to make 1.6.1 faster and less reliant on browser sniffs.</p> <h3>Download, report bugs, and get help</h3> <ul> <li><a href="/assets/2009/6/16/prototype.js">Download Prototype 1.6.1 RC3</a></li> <li><a href="">Submit bug reports</a> to Lighthouse</li> <li><a href="/discuss">Get Prototype help</a> on the mailing list or <code>#prototype</code> IRC channel</li> <li><a href="">Interact with the Core Team</a> on the protoype-core mailing list</li> </ul> <p>Thanks to the many contributors who made this release possible!</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Andrew tag:prototypejs.org,2009-03-27:25040 2009-03-27T23:33:00Z 2009-03-27T23:39:26Z Prototype 1.6.1 RC2: IE8 compatibility, Element storage, and bug fixes >This is the first public release of Prototype that is fully compatible — and fully <em>optimized for</em> — Internet Explorer 8’s “super-standards” mode. In particular, Prototype now takes advantage of IE8’s support of the <a href="" title="Selectors API">Selectors API</a> and its ability to extend the prototypes of DOM elements.</p> <h3>What’s new?</h3> <ul> <li><strong>Full compatibility with Internet Explorer 8</strong>. <a href="" title="perfection kills">Juriy</a> has spearheaded the effort to replace most of our IE “sniffs” into outright capability checks — making it far easier to support IE8 in both “super-standards” mode and compatibility mode.</li> <li><strong>Element storage</strong>, a feature <a href="" title="Prototype JavaScript framework: Pimp My Code #1: Element.Storage">announced previously</a>. Safely associate complex metadata with individual elements.</li> <li><strong><code>mouseenter</code> and <code>mouseleave</code></strong> events — simulating the IE-proprietary events that tend to be far more useful than <code>mouseover</code> and <code>mouseout</code>.</li> <li><strong>An <code>Element#clone</code> method</strong> for cloning DOM nodes in a way that lets you perform “cleanup” on the new copies.</li> </ul> <h3>What’s been improved?</h3> <ul> <li>Better housekeeping on event handlers in order to prevent memory leaks.</li> <li>Better performance in <code>Function#bind</code>, <code>Element#down</code>, and a number of other often-used methods.</li> <li>A number of bug fixes.</li> </ul> <p>Consult the <a href="" title="CHANGELOG at 6c38d842544159d2334f2252c9015c737d5046b0 from sstephenson's prototype - GitHub">CHANGELOG</a> for more details.</p> <p>In addition to the code itself, the 1.6.1 release features Prototype’s embrace of two other excellent projects we’ve been working on: <a href="" title="JavaScript dependency management and concatenation: Sprockets">Sprockets</a> (JavaScript concatenation) and <a href="" title="PDoc">PDoc</a> (inline documentation). Sprockets is now used to “build” Prototype into a single file for distribution. PDoc will be the way we document the framework from now on. 
The official API docs aren’t quite ready yet, but they’ll be ready for the final release of 1.6.1.</p> <h3>Download, Report Bugs, and Get Help</h3> <ul> <li><a href="/assets/2009/3/27/prototype.js">Download Prototype 1.6.1_rc2</a></li> <li><a href="">Submit bug reports</a> to Lighthouse</li> <li><a href="">Get Prototype help</a> on the rails-spinoffs mailing list or #prototype <span class="caps">IRC</span> channel</li> <li><a href="">Interact with the Core Team</a> on the prototype-core mailing list</li> </ul> <p>Thanks to the many contributors who made this release possible!</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Andrew tag:prototypejs.org,2009-02-20:24708 2009-02-20T01:59:00Z 2009-02-20T02:01:06Z Sprockets: Beautiful and angular >There are many great ways to use Sprockets in your own projects. You can use it the way Prototype does — split up your JavaScript into small, maintainable files, then <a href="" title="src/dom.js at ab1313ea202e0d0bfb7cd0f563b035040710da9b from sstephenson's prototype - GitHub">create “meta-files”</a> that include the smaller files in a logical order. Prototype had previously been doing this with plain ERB; now we integrate Sprockets as a Git submodule and use it to build our distributable file.</p> <p>Sprockets can also be used to write JavaScript “plugins”: bundles of files that can easily be integrated into existing code. With Sprockets, <a href="" title="JavaScript dependency management and concatenation: Sprockets">you can formally declare</a> that <code>foo.js</code> depends on <code>thud.js</code>; when your files are concatenated into one output file, <code>thud.js</code> will be included first.</p> <p>In addition, <a href="" title="JavaScript dependency management and concatenation: Sprockets">Sprockets lets JavaScript files <em>provide</em> other assets</a> — HTML, CSS, images, and the like. At build time, those assets will be copied into the document root of your server (in a way that preserves the sub-structure of directories within). This allows the plugin to refer to those assets via absolute URLs, instead of having to ask you where they’re located.</p> <p>A few facts are worth special mention.</p> <ul> <li><strong>Sprockets does not require Prototype.</strong> Sprockets directives can be inserted into any arbitrary JavaScript file. You can use Sprockets in your build system no matter which JavaScript framework you prefer.</li> <li><strong>Sprockets does not require Rails.</strong> Sam has also written an excellent <code>sprockets-rails</code> plugin, one which deftly applies the conventions of Rails plugins to JavaScript. But he has also written a <a href="" title="ext/nph-sprockets.cgi at e0ddeaf4c2f1e9e175df6dc909afd78057326a42 from sstephenson's sprockets - GitHub">generic CGI wrapper around Sprockets</a> that is framework-agnostic. Or, instead, you can integrate Sprockets into your build cycle without bothering your server stack with the details. If you use Rake, you can do this with Ruby, as Prototype does; otherwise you can use the <code>sprocketize</code> binary from the command line.</li> <li><strong>Sprockets-enabled JavaScript files can work just fine without Sprockets.</strong> If your plugin has its own “build stage,” then the distributable JavaScript will include no Sprockets directives. 
On the other hand, if your plugin is small enough not to require this overhead, your distributable can be a short JS file that declares its external dependencies at the top. Because <code>require</code> directives are an extension of comment syntax, they won’t confuse a JS interpreter.</li> </ul> <p>In short, we’re excited about what Sprockets means for the Prototype ecosystem. If you maintain a Prototype add-on library, the <a href="" title="Prototype: Core | Google Groups">prototype-core mailing list</a> would love to help you make it Sprockets-aware.</p> <p>Now is the time on Sprockets when we dance.</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Andrew tag:prototypejs.org,2009-02-16:24535 2009-02-16T08:41:00Z 2009-04-08T14:48:20Z Pimp My Code #1: Element.Storage <p>Man, it's quiet around here. Interested in doing some pimpin'?</p> <p>Man, it's quiet around here. Interested in doing some pimpin'?</p> <p>WAIT! COME BACK.</p> <p><em>Code</em> pimping. You know? <a href="" title="Prototype JavaScript framework: Want your code 'pimped'?">The thing I'd discussed before</a>? Forgive my earlier informality. I see now how my words could have been confusing.</p> <p>The very first edition of <cite>Pimp My Code</cite> is special because the code we’ll be looking at <em>will be included in Prototype 1.6.1</em>. (It's a bit like if we were to Pimp [someone's] Ride™, then decide to keep the car for ourselves.) So this is more than just an academic exercise for us — the “pimped” result is now part of the Prototype source code.</p> <h3>The Original</h3> <p>The code in question, from Sébastien Grosjean (a.k.a. ZenCocoon), implements element “storage” — attaching of arbitrary data to DOM nodes in a safe and leak-free manner. Other frameworks have had this for a while; <a href="" title="jQuery: The Write Less, Do More, JavaScript Library">jQuery</a>’s <code>$.fn.data</code>, for instance, is used heavily by jQuery plugin authors <a href="" title="Internals/jQuery.data - jQuery JavaScript Library">to great effect</a>. But Seb’s is based on the similar Mootools API, which I’ve admired since <a href="" title="MooTools - What’s New in 1.2: Element Storage">it debuted in Mootools 1.2</a>.</p> <p>Here’s Seb’s code. It’s a long code block, since he’s been thoughtful enough to comment the hell out of it:</p> <p>The idea is this: instead of storing arbitrary objects as properties on DOM nodes, create <em>one</em> custom property on the DOM node: an index to a global hashtable. The value of that key in the table will itself be a collection of custom key/value pairs. On top of avoiding nasty IE memory leaks (circular references between DOM objects and JS objects), this has the benefit of encapsulating all of an element’s custom metadata into one place.</p> <p>Let’s make a first pass at this, line-by-line.</p> <h3>The Critique</h3> <pre><code class="">Object.extend(Prototype, {UID: 1});</code></pre> <p>Already we’ve gotten to something I’d change. Seb is using the <code>Prototype</code> namespace correctly here, in that he’s storing something that’s of concern only to the framework and should feel “private.” But my own preference is to move this property into the <code>Element.Storage</code> namespace. 
I am fickle and my mind is hard to read.</p> <pre><code class="javascript">Element.Storage = { get: function(uid) { return (this[uid] || (this[uid] = {})); }, init: function(item) { return (item.uid || (item.uid = Prototype.UID++)); } }</code></pre> <p>OK, another change jumps out at me. The <code>Element.Storage.init</code> method gets called in both <code>Element#store</code> and <code>Element#retrieve</code>; it handles the case where an element doesn’t have any existing metadata. It creates our custom property on the node and increments the counter.</p> <p>In other words, <code>store</code> and <code>retrieve</code> are the only two places where this method is needed, so I balk at making it public. My first instinct was to make it a private method inside a closure:</p> <pre><code class="javascript">(function() { function _init(item) { return (item.uid || (item.uid = Prototype.UID++)); } // ... rest of storage code })();</code></pre> <p>I started down this path but quickly stopped. Instead, we’re going to refactor this part so that the <code>init</code> case is handled without the need for a separate method. Let’s move on for now.</p> <pre><code class="javascript">Element.Methods.retrieve = function(element, property, dflt) { if (!(element = $(element))) return; if (element.uid == undefined) Element.Storage.init(element); var storage = Element.Storage.get(element.uid); var prop = storage[property]; if (dflt != undefined && prop == undefined) prop = storage[property] = dflt; return prop; };</code></pre> <p>A few things to mention here.</p> <ul> <li> <p>Variable naming is important. The ideal name for the third parameter of this function would be <code>default</code>, but that’s off-limits; <code>default</code> is a reserved word in JavaScript. Seb’s opted for <code>dflt</code> here, which is clear enough. I’d change it to <code>defaultValue</code> because I like vowels.</p> <p>As an aside: my first instinct was to remove the <code>defaultValue</code> thing altogether, because I was surprised by the way it behaved. I didn’t find it very intuitive to give <code>Element#retrieve</code> the capability to <em>store</em> properties as well. So I took it out.</p> <p>I changed my mind several minutes later, when I wrote some code that leveraged element metadata. I had assumed I wouldn’t need the “store a default value” feature often enough to warrant the surprising behavior, but I was <em>spectacularly wrong</em>. I put it back in. Consider that a lesson on how your API design needs to be grounded in use cases.</p> </li> <li> <p>The idiom in the first line is used throughout Prototype and script.aculo.us (and, in fact, should be used more consistently). It runs the argument through <code>$</code>, but also checks the return value to ensure we got back a DOM node and not <code>null</code> (as would happen if you passed a non-existent ID). An empty <code>return</code> is equivalent to <code>return undefined</code>, which (IMO) is an acceptable failure case. Bonus points, Seb!</p> </li> <li> <p><p>The custom property Seb’s been using is called <code>uid</code>. I’m going to change this to something that’s both (a) clearly private; (b) less likely to cause a naming collision. In keeping with existing Prototype convention, we’re going to call it <code>_prototypeUID</code>.</p> </p> </li> <li> <p>Here’s a nitpick: <code>if (element.uid == undefined)</code>. 
The comparison operator (<code>==</code>) isn’t very precise, so if you’re testing for <code>undefined</code>, you should use the identity operator (<code>===</code>). You could also use Prototype’s <code>Object.isUndefined</code>. In fact, I will.</p> <p>I have a prejudice against the <code>==</code> operator. Most of the time the semantics of <code>===</code> are closer to what you <em>mean</em>. But this has special significance with <code>undefined</code>, which one encounters often in JavaScript. As an example: when you’re trying to figure out if an optional parameter was passed into a function, you’re looking for <code>undefined</code>. Any other value, no matter how “falsy” it is, means the parameter <em>was</em> given; <code>undefined</code> means it <em>was not</em>.</p> <p>(Oh, by the way: I am aware of the code screenshot on our homepage that violates the advice I just gave.)</p> </li> <li> <p>There are other checks against <code>undefined</code> in this function. For consistency I’m going to change these to use <code>Object.isUndefined</code> as well. Also, the check for <code>dflt != undefined</code> is unnecessary: if that compound conditional passes, it means <code>retrieve</code> is going to return <code>undefined</code> anyway, so it doesn’t matter which of the two <code>undefined</code> values we return.</p> </li> </ul> <p>Man, I’m a bastard, aren’t I? Luckily, <code>Element#store</code> is similar enough that there’s no new feedback to be given here, so I’m done kvetching.</p> <p>Before we rewrite this code to reflect the changes I’ve suggested, we’re going to make a couple design decisions.</p> <h3>Feature Design</h3> <p>While I was deciding how to replace <code>Element.Storage.init</code>, I had an idea: rather than use ordinary <code>Object</code>s to store the data, we should be using Prototype’s <code>Hash</code>. In other words, we’ll create a global table of <code>Hash</code> objects, each one representing the custom key-value pairs for a specific element.</p> <p>This isn’t just a plumbing change; it’s quite useful to be able to deal with the custom properties in a group rather than just one-by-one. And since <code>Hash</code> mixes in <code>Enumerable</code>, interesting use cases emerge: e.g., looping through all properties and acting on those that begin with a certain “namespace.”</p> <p>So let’s envision a new method: <code>Element#getStorage</code>. Given an element, it will return the <code>Hash</code> object associated with that element. If there isn’t one, it can “initialize” the storage on that element, thus making <code>Element.Storage.init</code> unnecessary.</p> <p>This new method also establishes some elegant parallels: the <code>store</code> and <code>retrieve</code> methods are really just aliases for <code>set</code> and <code>get</code> on the hash itself. Actually, <code>retrieve</code> will be a bit more complicated because of the “default value” feature, but we’ll be able to condense <code>store</code> down to two lines.</p> <h3>The Rewrite</h3> <p>Enough blathering. Here’s the rewrite:</p> <pre><code class="javascript">Element.Storage = { UID: 1 };</code></pre> <p>As promised, I’ve moved the <code>UID</code> counter. The <code>Element.Storage</code> object also acts as our global hashtable, but all its keys will be numeric, so the <code>UID</code> property won’t get in anyone’s way.</p> <p><code>Element#getStorage</code> assumes the duties of <code>Element.Storage.get</code> and <code>Element.Storage.init</code>, thereby making them obsolete. 
We’ve removed them.</p> <pre><code class="javascript">Element.addMethods({ getStorage: function(element) { if (!(element = $(element))) return; if (Object.isUndefined(element._prototypeUID)) element._prototypeUID = Element.Storage.UID++; var uid = element._prototypeUID; if (!Element.Storage[uid]) Element.Storage[uid] = $H(); return Element.Storage[uid]; },</code></pre> <p>The new <code>getStorage</code> method checks for the presence of <code>_prototypeUID</code>. If it’s not there, it gets defined on the node.</p> <p>It then looks for the corresponding <code>Hash</code> object in <code>Element.Storage</code>, creating an empty <code>Hash</code> if there’s nothing there.</p> <p>As I said before, <code>Element#store</code> is much simpler now:</p> <pre><code class="javascript"> store: function(element, key, value) { if (!(element = $(element))) return; element.getStorage().set(key, value); return element; },</code></pre> <p>I thought about returning the stored value, to make it behave exactly like <code>Hash#set</code>, but some feedback from others suggested it was better to return the element itself for chaining purposes (as we do with many methods on <code>Element</code>).</p> <p>And <code>Element#retrieve</code> is nearly as simple:</p> <pre><code class="javascript"> retrieve: function(element, key, defaultValue) { if (!(element = $(element))) return; var hash = element.getStorage(), value = hash.get(key); if (Object.isUndefined(value)) { hash.set(key, defaultValue); value = defaultValue; } return value; } });</code></pre> <p>And we’re done.</p> <h3>Further refinements</h3> <p>In fact, we’re <em>not</em> done. This is roughly what the code looked like when I first checked in this feature, but some further improvements have been made.</p> <p>Since we’d been using a system similar to this to associate event handlers with nodes, we had to rewrite that code to use the new storage API. In doing so, we found that we needed to include <code>window</code> in our storage system, since it has events of its own. Rather than define a <code>_prototypeUID</code> property on the global object, we give <code>window</code> a UID of <code>0</code> and check for it specifically in <code>Element#getStorage</code>.</p> <p>Also, based on an excellent suggestion, we changed <code>Element#store</code> so that it could accept an object full of key/value pairs, much like <code>Hash#update</code>.</p> <h3>In Summation</h3> <p>I was happy to come across Sébastien's submission. It was the perfect length for a drive-by refactoring; it made sense as a standalone piece of code, without need for an accompanying screenshot or block of HTML; and it implemented a feature we'd already had on the 1.6.1 roadmap.</p> <p>You can <a href="" title="sstephenson's prototype at master - GitHub">get the bleeding-edge Prototype</a> if you want to try out the code we wrote. Or you can <a href="" title="gist: 53924 - GitHub">grab this gist</a> if you want to drop the new functionality in alongside 1.6.0.3.</p> <p>We're further grateful to Mootools for the API we're stealing. 
And to <a href="" title="Call Me Fishmeal.">Wil Shipley</a> for the recurring blog article series we're stealing.</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Tobie tag:prototypejs.org,2008-10-17:22154 2008-10-17T07:32:00Z 2008-10-17T07:33:12Z Prototype Linkedin Group <p>When we first <a href="">launched</a> the <a href="">Linkedin Prototype Group</a>, we weren’t necessarily expecting it to be such a success–it’s over 800 members strong and counting.</p> <p>When we first <a href="">launched</a> the <a href="">Linkedin Prototype Group</a>, we weren’t necessarily expecting it to be such a success–it’s over 800 members strong and counting.</p> <p>Also, at the time, there wasn’t much you could do after having joined the group. This has changed with the recent introduction of discussions.</p> <p><a href="">One of the first posts</a> spurred some thoughts about the usefulness and goals of this Linkedin group especially given the high quality of our <a href="">new mailing list</a>. (And let me take the opportunity to sincerely thank <a href="">T.J. Crowder</a> for all the effort he’s put into it.)</p> <p>My initial reaction, based on <a href="">an early August thread</a> was to suggest keeping the development-orientated discussions in the mailing list, while expecting more career-orientated ones to take place in the Linkedin group.</p> <p>Of course, there’s no way we can nor should be controlling this, and in the end, you will be deciding what will happen where. So I suppose the only real <em>raison d’être</em> of this post is to advise you of this new feature and open up the debate.</p> <p>Thoughts ?</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Andrew tag:prototypejs.org,2008-10-07:22119 2008-10-07T21:23:00Z 2008-10-07T21:24:52Z Want your code "pimped"? <p>We decided it’s finally time to implement an idea we had long ago.</p> <p>We decided it’s finally time to implement an idea we had long ago.</p> <p>I’m an avid reader of the blog of Wil Shipley, a man in the business of writing great apps for OS X. His running code improvment series, <a href="" title="Call Me Fishmeal.: I will insult your code!">Pimp My Code</a>,.</p> <p>So we’re going to do something similar on this blog. Do you have a piece of JavaScript you want refactored? Does it use Prototype? Do this:</p> <ol> <li>Sign up for a <a href="">GitHub</a> account if you don’t have one. It’s free and quick.</li> <li>Go to <a href="" title="Gist — GitHub">Gist</a>, GitHub’s pastebin app, and paste the code you want us to refactor. Mark it as “private” if you like.</li> <li><a href="">Message me on GitHub</a> with the URL to your code snippet. If necessary, explain a bit about what the code does (or should do), but don’t write an epistle or anything.</li> </ol> <p <a href="" title="The Daily WTF: Curious Perversions in Information Technology">DailyWTF</a>-style exercise.</p> <p.</p> <p>If that sounds useful to you, then step up! 
Give us code and ask that it be pimped!</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Andrew tag:prototypejs.org,2008-10-07:22118 2008-10-07T17:39:00Z 2008-10-07T19:37:06Z Growing the community <p>Now that 1.6.0.3 is out, let’s talk about the Prototype community.</p> <p>Now that 1.6.0.3 is out, let’s talk about the Prototype community.</p> <p>A lot of people have been commenting on how quiet it’s been around here over the last few months. There are several reasons:</p> <ul> <li>We were quite busy with behind-the-scenes stuff. Moving to <a href="">GitHub</a> and <a href="">Lighthouse</a> was quite the task. As part of that migration we went through all the bugs on the old Rails Trac and were therefore left with a large backlog of bugs that we’d waited too long to address.</li> <li>We were quite busy with our day jobs. Only a couple of us are freelancers; the rest work full-time for software companies. And usually there are several people working on Prototype at any one time, but over the summer it’s rarely been more than one or two.</li> <li>In an effort to “catch up” with the accumulated tickets, we tried to stuff too much into a single bugfix release. We need to keep releases small and focused; trying to change too much at once tends to disorient us and our users. Once we realized we needed to scale back this release, it took a while to figure out which changes needed to stay and which needed to be reverted.</li> </ul> <p>These aren’t excuses; they’re just explanations. As a team, we agree that we’ve got to prevent such a long release gap from happening again, and to keep an eye out for warning signs like the ones listed above.</p> <p>This means, among other things, that we’re planning to move away from a “when it’s ready” release schedule. Instead, we’ll move toward one in which there are several releases per year; whatever <em>is</em> ready in time for a given release will go in, and whatever <em>is not</em> will have to wait. That applies to bug fixes and features alike. Eight months between releases just won’t work.</p> <h3>What you can do</h3> <p>Community outreach was one of the major goals of Prototype Developer Day. Many people are frustrated with the state of the Prototype community and would like to see some changes made. We’re in complete agreement.</p> <p>Ideally, as an open-source community grows, those who want to help out gravitate toward specific roles. Those who can grok the source code write patches; those who are good at diagnosing problems file bug reports; those who can write clearly contribute documentation; and so on. We’d love to grow that “halo” around Prototype Core so that things can get done more quickly.</p> <p>To be more specific, we would love help in any of these areas:</p> <ol> <li>Give support on the <a href="">Prototype & scrip.aculous mailing list</a>.</li> <li><a href="">File bugs in Lighthouse</a> when you encounter errors or surprising behavior in Prototype.</li> <li>Write test cases or patches for <a href="">existing bugs in Lighthouse</a>.</li> <li>Discuss the direction of the library and its future on the <a href="">Prototype Core mailing list</a>.</li> <li>Propose new features and implement them.</li> <li>Write documentation wherever you feel we need more; <a href="">submit it to Lighthouse</a> as an enhancement.</li> <li>Suggest blog posts. (Or even write them!) 
<a href="">Post to the Prototype Core list</a> if you’re interested in doing this.</li> </ol> <p>There are, of course, many other things one can do to help us out. But if you’re looking for a way to contribute and don’t have something specific in mind, we’d suggest doing one of these seven things.</p> <h3>What we can do</h3> <p>We know we need more help, but we also know we need to be better community curators. So here are some things we pledge to do better:</p> <ol> <li>We’ll beef up the Prototype web site so that it’s easier to get started with the framework, easier to find great resources like <a href="" title="Scripteka :: Prototype extensions library">Scripteka</a> and <a href="" title="Prototype UI">Prototype UI</a>, and easier to find answers to common questions.</li> <li>We’ll give special attention to documentation tickets on Lighthouse so that our API docs don’t stay stale and thin.</li> <li>We’ll release on a more consistent schedule, as explained above.</li> <li>We’ll resume work on <a href="">PDoc</a> (inline documentation) and <a href="">Sprockets</a> (JS dependency management), spin-off projects that make Prototype more of a “platform.” They’ll be a boon to the Prototype ecosystem when they’re completed.</li> </ol> <p>Finally: if you consider yourself to be good at planning and organizing an open-source project, then we’d love your input on how to grow our community. Our highest priority, however, is not to launch a new initiative or process; it’s to get more people doing the seven things listed above.</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Andrew tag:prototypejs.org,2008-09-30:22022 2008-09-30T17:48:00Z 2008-09-30T17:49:27Z Prototype 1.6.0.3: A long-awaited bugfix release <p>Yesterday we released Prototype 1.6.0.3, the result of some much-needed bug fixes, and a stopgap release on the road to 1.6.1.</p> <p>Yesterday we released Prototype 1.6.0.3, the result of some much-needed bug fixes, and a stopgap release on the road to 1.6.1.</p> <p>It’s a backwards-compatible, drop-in replacement recommended for all users of Prototype 1.6. We’ve fixed 30 bugs and made 25 other improvements to our already-rock-solid library.</p> <p>Developers who follow along in Git might’ve noticed that the repository has seen <em>a lot</em>.</p> <p>Because of the way we handled this overhaul, those who try to update their Git working copies to the latest trunk will encounter conflicts, <em>even if they hadn’t made local changes</em>. </p> <p>Here’s how we recommend bringing your working copy up to date:</p> <ol> <li>First, if you’ve made any local changes, please create a new branch so that those changes aren’t lost.</li> <li><p>On your local master branch, run:</p> <pre><code>git fetch origin master git reset --hard 34ee207</code></pre> <p>The first line fetches the new commits without trying to apply them to your local copy. 
The second line resets your master branch to be in sync with the latest revision.</p> </li> <li>From there, you can cherry-pick from your branch any local commits you made (though you may have to do some manual merging).</li> </ol> <h3>Download, report bugs, and get help</h3> <ul> <li><a href="">Download Prototype 1.6.0.3</a></li> <li><a href="">Submit bug reports</a> to Lighthouse</li> <li><a href="">Get Prototype help</a> on the Prototype & script.aculo.us mailing list or #prototype <span class="caps">IRC</span> channel</li> <li><a href="">Interact with the Core Team</a> on the prototype-core mailing list</li> </ul> <p>As always, thanks to the core team and the many users who contributed bug reports and well-tested patches for this release.</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Tobie tag:prototypejs.org,2008-08-11:21243 2008-08-11T13:48:00Z 2008-08-11T16:52:17Z Practical Prototype and script.aculo.us >Obviously, <a href=""><cite>Practical Prototype and script.aculo.us</cite></a> <em>how</em> but also the <em>why</em>. In the ruthless world of client-side development, that’s a serious asset!</p> <p><a href=""><cite>Practical Prototype and script.aculo.us</cite></a> is a pleasure to read – the style is both straightforward <em>and</em> witty – and should appeal to beginners and seasoned developers alike.</p> <p>If you want to try before you buy, you can always download a <a href="">sample chapter</a> or the <a href="">table of contents</a> from the Apress website. Or you can grab a hard copy and/or a pdf from the <a href="">Apress website</a> or from <a href="">Amazon</a>.</p> <p>As always, happy Prototyping!</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Christophe tag:prototypejs.org,2008-07-25:21197 2008-07-25T16:29:00Z 2008-09-03T07:35:31>9:50am</td> <td>Greeting</td> <td>Framework Summit Sponsor</td> </tr> <tr> <td>9:50. (But, hey, there’s a free lunch!)</li> <li><a href="">Register for The Ajax Experience</a>.</li> </ul> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Andrew tag:prototypejs.org,2008-07-14:21171 2008-07-14T15:51:00Z 2008-07-14T15:59:25Z Announcing new Prototype support mailing list <p>Subscribers to the Rails Spinoffs mailing list should switch to our new, better-named list: <a href="" title="Prototype & script.aculo.us | Google Groups">Prototype & script.aculo.us</a>.</p> <p>Subscribers to the Rails Spinoffs mailing list should switch to our new, better-named list: <a href="" title="Prototype & script.aculo.us | Google Groups">Prototype & script.aculo.us</a>.</p> <p>While these two venerable libraries are, in truth, spinoffs of the Rails project, we’ve come to realize it’s far more user-friendly to have the libraries’ names in the name of the mailing list. This should help guide users to the right spot and reduce the amount of support traffic on the <a href="" title="Prototype: Core | Google Groups">Prototype Core mailing list</a> — which is for discussion of Prototype’s development process, not support.</p> <p>Because list spam is a sad reality, your first post to the list will be held for moderation. 
Once it’s approved, though, you’ll be able to post with impunity.</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Tobie tag:prototypejs.org,2008-06-11:20683 2008-06-11T21:58:00Z 2008-11-10T07:53:35Z An Interview with Ryan Johnson<b>Hi, Ryan. Could you please introduce yourself.</b></p> <p><span class="interviewee">Ryan Johnson:</span> I’ve been writing web pages since 1993, but I’ve only felt comfortable calling myself a programmer for the last 5 years. I drank the Prototype kool-aid about 2 and a half years ago, and I’d say today nearly 75% of all the code I write is JavaScript. I enjoy writing Ruby just as much, but fewer and fewer people are asking me to do any work in Ruby at all.</p> <p>The language itself hasn’t evolved all that much, but watching our collective knowledge and understanding of it grow has been a surprise and delight.</p> <p class="question"><b>You just released a new web application: PersonalGrid. Can you tell us about it?</b></p> <p><img src="/assets/2008/6/11/personalgrid.png" /> <span class="interviewee">RJ:</span> <a href="">PersonalGrid</a> is a file-sharing and publishing application that I’ve written. You can use it to upload files and publish single files or whole folders with one click. It’s also easy to share with friends or whole groups of users.</p> <p>We have a dev team of one (me), and this is our first beta release, so hopefully any bugs you encounter won’t be too catastrophic.</p> <p class="question"><b>How are you using Prototype in PersonalGrid? And script.aculo.us?</b></p> <p><span class="interviewee">RJ:</span> On the Rails side, 95% of the actions use a REST interface and spit back <a href="">JSON</a>, so the app is very client-heavy. Almost all of the HTML is generated with the Prototype <a href=" “Prototype JavaScript framework: Template”">Template</a> class, with a little usage of the <a href=""><code>new Element</code></a> syntax thrown in where appropriate. I used the Draggables and Droppables from script.aculo.us, but little else. I also ended up making many modifications (which I will release on <a href="">GitHub</a> sometime soon) to both of those classes to support some extra functionality.</p> <p>Both Object.Event and LivePipe UI are discussed below, but those libraries are integral to the application. So our JavaScript stack looks like Prototype → LivePipe UI → PersonalGrid Application. The actual PersonalGrid JavaScript code is a number of classes that represent the major UI components (File, Folder, Friend, Group, etc), and a number of controller classes that initiate Ajax requests and process the JSON responses.</p> <p>In the process of building the application layer I kept seeing inklings of a JavaScript MVC framework, but I just don’t see where the reusability would come from. Prototype is ever more awesome, and I’m releasing components that others will hopefully find useful.</p> <p class="question"><b>What were the biggest challenges you faced when building it?</b></p> <p><span class="interviewee">RJ:</span> Internet Explorer. The only debate about the IE debacle that should be going on is whether the product is a result of incompetence or was designed deliberately to sabotage the development of complex web applications. 
They got XHR and the mouseenter/mouseleave events right, but that is about it.</p> <p class="question"><b>Any technical advice, tips, or tricks you’d like to share?</b></p> <p><span class="interviewee">RJ:</span> Start using a broadcast/subscription based event model for everything in your app, not just Element objects! Of course I am going to plug my own solution <a href="">Object.Event</a> — but whether or not you use that, having a system where you can trigger your own events that do not relate to the DOM is critical for the maintainability of your code base. The new custom events in Prototype 1.6 are great (and I used a few in PersonalGrid), but it’s still geared towards the DOM.</p> <p>For example, we have a trash can feature in PersonalGrid. Each user’s root directory has a <code>.Trash</code> folder, which mostly acts like any other folder, but we need to specialize its behavior. The two biggest differences are that we want to take the <code>.Trash</code> folder out of the normal directory listing, and give it a special place in the UI. We also want <code>.Trash</code> to behave differently when you are in it. We have a Location class that is responsible for changing folders, rending the directory listing, etc. Instead of putting these specializations for the Trash inside the Location class, we have the Location class fire an <code>onChangeLocation</code> event, which the Trash class observes.</p> <p>It’s not only a conceptually elegant way to solve the problem, but you get the added benefit of having all of the code that relates to the Trash in one place. As I was developing the app, we ended up wanting all of these little specializations for friendship folders, group folders, etc, so the broadcast/subscription model has really ended up paying huge dividends as the project progresses.</p> <p class="question"><b>You’re using a Java applet for file upload. Why did you choose to use that technology? What are the advantages over using flash?</b></p> <p><span class="interviewee">RJ:</span> There are some problems with the delay in loading the JVM, and the whole certificate/trust issues that all applets have, so I don’t want to sound too triumphant about the choice just yet. The main reason I choose Java instead of Flash is that you can drag and drop files onto the applet, which Flash does not support. Leopard supports dragging files directly onto file inputs, but users do not universally expect that behavior yet.</p> <p>One of the areas I’d like to explore more is deep interaction between Java/Flash and JavaScript. The Java applet is one of the few parts of PersonalGrid that I didn’t write, but I worked closely with our Java coder to create a large series of JavaScript callbacks inside the applet so I could build a UI with Prototype.</p> <p><img src="/assets/2008/6/11/personalgrid_3_1.jpg" alt="Java applet uploader screenshot" /></p> <p>There are a lot of fairly hairy undocumented bugs with <a href="">LiveConnect</a> (the Java/JS bridge), but until we get richer native functionality this is the only way to get around some of the security constrains browsers place on accessing the local machine.</p> <p class="question"><b>You’re well known within the Prototype community for Control Suite. Can you tell us a bit more about it? Are you using any of it in PersonalGrid?</b></p> <p><span class="interviewee">RJ:</span> Well I’d like to apologize to the users of Control Suite for neglecting it for the past 8 months! PersonalGrid and some other obligations really destroyed my schedule. 
Control Suite has just received a major update, and is now called <a href="">LivePipe UI</a> and is compatible with Prototype 1.6. Most of the complex UI elements you see in PersonalGrid (windows, context menus, selection, etc) are available in the new LivePipe UI release.</p> <p>LivePipe UI tries to provide a set of reusable core UI components that has a similar API design philosophy to Prototype. So far only components I have needed are part of the kit, but I am hoping that it grows with time. Now that it is on GitHub I’m hoping that it will be easier for users to contribute. The biggest news to existing users is that the Control.Modal class has been completely rewritten, and it is now a subclass of Control.Window. There are also proper Lightbox and Tooltip classes. The new class system in Prototype 1.6 made that far more elegant than it would have been before.</p> <p class="question"><b>PersonalGrid has a distinct Mac feel. What made you aim for a desktop-like application?</b></p> <p><span class="interviewee">RJ:</span> Since an application of this nature is all about finding and organizing files and folders, why not recreate an interface people are already completely familiar with? We have a ways to go to catch up feature wise to <a href="">Box.net</a>, but when I first used their service I noticed they used some desktop metaphors (like drag and drop), but overall the application still felt too much like a website. Plenty of web services <strong>should</strong> feel like websites, but I don’t think file management apps should (except for <a href="">Drop.io</a>, which is wonderfully simple).</p> <p><img src="/assets/2008/6/11/personalgrid_2_1.jpg" alt="PersonalGrid screenshot" /></p> <p>With regards to the Mac feel… besides borrowing some of the icons (still wondering if we will hear from Apple legal), there are a lot of very particular things that I like about the Finder. One of the hidden features of the PersonalGrid UI is that if you pick up an item and hover over any folder, breadcrumb, group or friendship, you will navigate to that location, and you will still be able to drop the item in any sub folder at the new location. The Finder does this, but I rarely use it because you can have multiple Finder windows open, or use the column view. In a two paned interface it’s the only way to elegantly get an item from A to C without moving it to B first.</p> <p>Rich web development is still in its infancy, but Apple (and others) have had many complex UI problems elegantly solved for years on the desktop, so when I would run into a brick wall like the A-to-C problem, I would see how it was solved in the Finder, or even read the documentation in the <a href="">Human Interface Guidelines</a>.</p> <p>It was also an amazingly fun challenge to deconstruct and recreate something as basic as the selection for the new <a href="">Control.Selection</a> library, which is also one of the core components of the PersonalGrid UI. When building something that complex yet fundamental one realizes all of the tweaks that coders and designers before you have thought obsessively about.</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Tobie tag:prototypejs.org,2008-05-28:20739 2008-05-28T23:21:00Z 2008-05-29T00:14:19Z An Interview with Piotr Dachtera<b>Hi, Piotr. Could you please introduce yourself?</b></p> <p><span class="interviewee">Piotr Dachtera:</span> Sure. 
I’m currently the lead developer of the JavaScript/Ajax/Comet part of <a href="">chess.com</a>.</p> <p>I’ve been dedicated to Web + chess applications since 1999. I’ve also been working on business software since 2000 (mainly <a href="">Java</a>). My interest in the game of chess was always pushing me forward and finally I think I can say I’m working on the number one web chess project.</p> <p class="question"><b>You’ve built <a href="">Live Chess</a> using Prototype and <a href="">script.aculo.us</a>. Can you tell us more about the application?</b></p> <p><span class="interviewee">PD:</span> Chess community website users need to share things using a specific “language” which needs something more than plain text. They need to share game positions, whole chess games with analysis, and chess puzzles. Also, they want to play against each other.</p> <p><img src="/assets/2008/5/28/chess.jpg" alt="Screenshot of Live Chess" /></p> <p>As we were starting to work on it all, I already had some experience with Prototype and script.aculo.us, and it was the natural choice.</p> <p>We started with the interactive boards with draggable pieces, chess game parsers and things like that. Naturally, I had in mind that we were going to build something much more complex (the scalable real-time play server), so enclosing everything in reusable classes was the only solution. As it was always tempting to see what’s inside Prototype, I was investigating its source all the time and I was trying to build my classes using the same style.</p> <p class="question"><b>Live Chess uses Comet to keep the chessboard synchronized. What made you choose that technology over ordinary Ajax?</b></p> <p><span class="interviewee">PD:</span> The most important word in “Live Chess” is “live.” We need things to happen instantly. We can’t use polling to check every 10 seconds if anything changed.</p> <p>If people want to play a game of chess in 2 minutes, they need some kind of <em>instant</em> communication.</p> <p class="question"><b>Can you give us more details on the Comet implementation?</b></p> <p><span class="interviewee">PD:</span> Working on my own proof-of-concept chess server in 2005, I “(re)invented” the Comet idea to allow this kind of communication… only to find out that people were using the same idea in simple chat apps.</p> <p>The next step (which came with the new server built for the chess.com community and needed real scalability) was to use the idea of thread-less server solutions implemented in <a href="">Jetty</a> server and <a href="">ActiveMQ</a> to push messages between client and server. </p> <p>Finally, we switched to <a href="">Cometd/Bayeux</a> with our own solution for guaranteed messaging and message ordering.</p> <p>In all of these solutions, there was always Java on the server-side.</p> <p class="question"><b>What are you using Prototype for?</b></p> <p><span class="interviewee">PD:</span> I started with script.aculo.us effects investigation which guided me directly to Prototype.</p> <p>Currently, I’m not really a JavaScript developer. I’m a Prototype developer using the library everywhere.</p> <p class="question"><b>On top of Prototype and <a href="">Dojo</a>, I saw you were also using <a href="">ExtJS</a> in Live Chess. 
Was the integration of these three libraries seamless or were there issues?</b></p> <p><span class="interviewee">PD:</span> We are using ExtJS with the <a href="">Prototype adapter</a> so there was nothing really hard to do here.</p> <p>Dojo things are completely separated from the rest of the system, but we had to use the library for Cometd communication. As a Prototype fan, I would be really happy to have a Cometd implementation built on top of Prototype, but currently I had to use Dojo in this area (which is sandboxed in a separate communication frame).</p> <p class="question"><b>What were the biggest challenges you faced with this application?</b></p> <p><span class="interviewee">PD:</span> We still have lots of big challenges! But if I had to choose one, I would say: performance. Everybody wants to use Live Chess (and any other similar application) as if it was a desktop application. As the system becomes heavier and more complex, we still have to work hard to keep it <em>smooth</em>.</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> sam tag:prototypejs.org,2008-05-27:20728 2008-05-27T19:06:00Z 2008-05-27T20:10:17Z Prototype hosted on Google's servers <p>Good news! Google now offers a cached, compressed copy of Prototype on its high-speed content distribution network via the <a href=""><span class="caps">AJAX</span> Libraries <span class="caps">API</span></a>.</p> <p>Good news! Google now offers a cached, compressed copy of Prototype on its high-speed content distribution network via the <a href=""><span class="caps">AJAX</span> Libraries <span class="caps">API</span></a>.</p> <p>You can either link to the source code directly:</p> <pre> <script type="text/javascript" src=""></script> </pre> <p>Or you can use Google’s <span class="caps">API</span>:</p> <pre> <script type="text/javascript" src=""></script> <script type="text/javascript">google.load("prototype", "1.6.0.2");</script> </pre> <p>More information is available from <a href="">Google’s documentation</a>.</p> <p>When a specific version of Prototype is delivered to your browser, it will be cached for one year and served with the proper compression headers. That means that most users of sites which link to Google’s copy of Prototype will incur a ~30 KB download only once.</p> <p>We typically encourage developers building applications with Prototype to concatenate all their JavaScript into a single file, and serve that file with the proper content expiration and compression settings. In cases where this is unfeasible, Google’s hosted version is an excellent alternative.</p> <p>Special thanks to Dion Almaer and the team at Google for making this possible.</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div> Tobie tag:prototypejs.org,2008-05-21:20702 2008-05-21T06:01:00Z 2008-05-22T02:13:20Z An Interview with Amy Hoy<b>Amy, you’ve been involved in both the Prototype/<a href="">script.aculo.us</a> and <a href="">Ruby on Rails</a> communities since their inception. What was your background? What brought you there?</b></p> <p><span class="interviewee">Amy Hoy:</span> I admit it — I was a professional <a href="" title="PHP: Hypertext Preprocessor">PHP</a>.</p> <p>I had found out about <a href="">Basecamp</a> from a designer’s community I was part of, waaay back when it was in beta testing. It really stood out in my mind. 
Later, somebody else posted about this thing they used to build Basecamp, so of course I was intrigued.</p> <p.</p> <p>Somebody felt my pain! And it was awesome. I feel like this could be a meaty episode of Dr. Phil.</p> <p>Script.aculo.us I learned about through Rails, and naturally it appealed to me because of all the visual possibilities.</p> <p class="question"><b>Although you’re not teaching JavaScript <i>per se</i>, you’ve always been inclined in facilitating the learning process, whether it be through your <a href="">blog</a>, <a href="">presentations</a> or <a href="">cheat-sheets</a>. What’s with that?</b></p> <p><span class="interviewee">AH:</span>. </p> <p>And Javascript, well. JavaScript is a really painful subject to learn from scratch. </p> <p.</p> <p.</p> <p>While the whole experience was ultimately exhilarating, the lack of good learning sources meant it was also hair-tearingly frustrating. So I set out to make some.</p> <p>I can say from experience that it takes a lot of hard work and analysis to create a truly great tech education resource. And traditional book publishers don’t pay well for the effort, either. It’s not at all surprising that there’s no One True Source. </p> <p>But… my script.aculo.us cheat sheet has been downloaded over 500,000 times. Clearly I’m not the only one who sees a problem. </p> <p class="question"><b>In the same vein, you’ve teamed-up with <a href="">Thomas Fuchs</a> (creator of script.aculo.us and Prototype Core committer) to write an ebook on JavaScript “basics”.</b></p> <p><span class="interviewee">AH:</span>?)</p> <p. </p> <p>If people do like it, we’ll continue the series: the DOM, effects, compatibility, the works. </p> <p><img title="Amy Hoy and Thomas Fuchs" src="/assets/2008/5/22/amy_thomas_3_1.jpg" alt="Amy and Thomas" /></p> <p. </p> <p>And last year, I did a shorter talk on Object#prototype and Prototype, and that was very well-attended too.</p> <p>So the need was there two years ago and I’m convinced that it’s still here. </p> <p class="question"><b>You <a href="">recently mentioned</a> finding API docs “almost universally frustrating”, and looking into new, somewhat self-reflexive solutions for documentation. Can you elaborate?</b></p> <p><span class="interviewee">AH:</span> Without turning this into a whinefest? Certainly!</p> <p>Having great API docs is one of the best ways that people can promote the usage of their open source projects. I, for one, use the prototypejs.org site constantly when working in JavaScript. </p> <p.</p> <p>So, given those arguments… how could we make something better? I’d been thinking about this a lot and one night over a couple of beers in Vienna, Thomas and I came up with explain(). </p> <p>It’s still under development, but the basic idea is that you’ll be able to call the <code>explain()</code>.</p> <p>This of course will be an add-on for development, not something that would be deployed in the standard package. </p> <p class="question"><b>You also seem to practice thinking outside the box in your own work. <a href="">Twistori</a>, which you’ve just released in collaboration with Thomas Fuchs, is a great example of that. 
Other than being immediately distinguishable by its stunning looks, it’s not your average mash-up.</b></p> <p><span class="interviewee">AH:</span> Thanks.</p> <p>We’ve both been doing a lot of intranet and corporate software and the morning I decided I had to ship something, I think we were both about to pop with creative frustration.</p> <p <a href="" title="PostSecret">PostSecret</a>, and a lot like the normal, everyday cacophony of life. </p> <p><img title="Twistori screenshot" src="/assets/2008/5/21/twistori_1.png" alt="screenshot of Twistori" /></p> <p>Every choice in the design is intentional—the flow, the colors, the way the messages are formatted, the anonymity, and the lack of ability to “scroll back” when a message has gone by. This all contributes to the effect of what one person called “the river of humanity.” </p> <p! </p> <p.</p> <p>I think the biggest lesson there is that humans don’t really change with technology. We still seek connections, and we are still voyeuristic and interested in other people, and if you can make design decisions with that in mind, people will be affected by your work.</p> <p class="question"><b>Anything else you’d like to add?</b></p> <p><span class="interviewee">AH:</span> I like to dabble. A lot. In everything! I like to figure out where the pain’s coming from and try to fix it, and surprisingly, it tends to work out.</p> ).</p> <p>Lots of really unbelievable things are coming out now (like John Resig’s <a href="">processing.js</a> and <a href="">script.aculo.us 2.0</a>) and it’s not likely to stop any time soon. Things are going to get even better. I’m really excited to be a part of it.</p> <div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div>
http://feeds.feedburner.com/prototype-blog
crawl-002
refinedweb
10,300
56.45
#include <stdint.h> #include <rte_common.h> #include <rte_meter.h> Go to the source code of this file. RTE Generic Traffic Manager API This interface provides the ability to configure the traffic manager in a generic way. It includes features such as: hierarchical scheduling, traffic shaping, congestion management, packet marking, etc. Definition in file rte_tm.h. Ethernet framing overhead. Overhead fields per Ethernet frame: One of the typical values for the pkt_length_adjust field of the shaper profile. Definition at line 45 of file rte_tm.h. Ethernet framing overhead including the Frame Check Sequence (FCS) field. Useful when FCS is generated and added at the end of the Ethernet frame on Tx side without any SW intervention. One of the typical values for the pkt_length_adjust field of the shaper profile. Definition at line 57 of file rte_tm.h. Invalid WRED profile ID. Definition at line 66 of file rte_tm.h. Invalid shaper profile ID. Definition at line 75 of file rte_tm.h. Node ID for the parent of the root node. Definition at line 82 of file rte_tm.h. Node level ID used to disable level ID checking. Definition at line 89 of file rte_tm.h. Congestion management (CMAN) mode This is used for controlling the admission of packets into a packet queue or group of packet queues on congestion. On request of writing a new packet into the current queue while the queue is full, the tail drop algorithm drops the new packet while leaving the queue unmodified, as opposed to head drop algorithm, which drops the packet at the head of the queue (the oldest packet waiting in the queue) and admits the new packet at the tail of the queue.. Definition at line 960 of file rte_tm.h. Verbose error types. Most of them provide the type of the object referenced by struct rte_tm_error::cause. Definition at line 1176 of file rte_tm.h. Traffic manager get number of leaf nodes Each leaf node sits on top of a Tx queue of the current Ethernet port. Therefore, the set of leaf nodes is predefined, their number is always equal to N (where N is the number of Tx queues configured for the current port) and their IDs are 0 .. (N-1). Traffic manager node ID validate and type (i.e. leaf or non-leaf) get The leaf nodes have predefined IDs in the range of 0 .. (N-1), where N is the number of Tx queues of the current Ethernet port. The non-leaf nodes have their IDs generated by the application outside of the above range, which is reserved for leaf nodes. Traffic manager capabilities get Traffic manager level capabilities get Traffic manager node capabilities get Traffic manager WRED profile add Create a new WRED profile with ID set to wred_profile_id. The new profile is used to create one or several WRED contexts. Traffic manager WRED profile delete Delete an existing WRED profile. This operation fails when there is currently at least one user (i.e. WRED context) of this WRED profile. Traffic manager shared WRED context add or update When shared_wred_context_id is invalid, a new WRED context with this ID is created by using the WRED profile identified by wred_profile_id. When shared_wred_context_id is valid, this WRED context is no longer using the profile previously assigned to it and is updated to use the profile identified by wred_profile_id. A valid shared WRED context can be assigned to several hierarchy leaf nodes configured to use WRED as the congestion management mode. Traffic manager shared WRED context delete Delete an existing shared WRED context. This operation fails when there is currently at least one user (i.e. 
hierarchy leaf node) of this shared WRED context. Traffic manager shaper profile add Create a new shaper profile with ID set to shaper_profile_id. The new shaper profile is used to create one or several shapers. Traffic manager shaper profile delete Delete an existing shaper profile. This operation fails when there is currently at least one user (i.e. shaper) of this shaper profile. Traffic manager shared shaper add or update When shared_shaper_id is not a valid shared shaper ID, a new shared shaper with this ID is created using the shaper profile identified by shaper_profile_id. When shared_shaper_id is a valid shared shaper ID, this shared shaper is no longer using the shaper profile previously assigned to it and is updated to use the shaper profile identified by shaper_profile_id. Traffic manager shared shaper delete Delete an existing shared shaper. This operation fails when there is currently at least one user (i.e. hierarchy node) of this shared shaper. Traffic manager node add Create new node and connect it as child of an existing node. The new node is further identified by node_id, which needs to be unused by any of the existing nodes. The parent node is identified by parent_node_id, which needs to be the valid ID of an existing non-leaf node. The parent node is going to use the provided SP priority and WFQ weight to schedule its new child node. This function has to be called for both leaf and non-leaf nodes. In the case of leaf nodes (i.e. node_id is within the range of 0 .. (N-1), with N as the number of configured Tx queues of the current port), the leaf node is configured rather than created (as the set of leaf nodes is predefined) and it is also connected as child of an existing node. The first node that is added). Further restrictions for root node: needs to be non-leaf, its private shaper profile needs to be valid and single rate, cannot use any shared shapers. delete Delete an existing node. This operation fails when this node currently has at least one user (i.e. child node). suspend Suspend an existing node. While the node is in suspended state, no packet is scheduled from this node and its descendants. The node exits the suspended state through the node resume operation. Traffic manager node resume Resume an existing node that is currently in suspended state. The node entered the suspended state as result of a previous node suspend operation. Traffic manager hierarchy commit This function is called during the port initialization phase (before the Ethernet port is started) to freeze the start-up hierarchy. This function typically performs the following steps: a) It validates the start-up hierarchy that was previously defined for the current port through successive rte_tm_node_add() invocations; b) Assuming successful validation, it performs all the necessary port specific configuration build from scratch (when clear_on_fail is enabled) or by modifying the existing hierarchy configuration (when clear_on_fail is disabled). Note that this function can still fail due to other causes (e.g. not enough memory available in the system, etc), even though the specified hierarchy is supported in principle by the current port. Traffic manager node parent update This function may be used to move a node and its children to a different parent. Additionally, if the new parent is the same as the current parent, this function will update the priority/weight of an existing node. Restriction for root node: its parent cannot be changed. 
This function can only be called after the rte_tm_hierarchy_commit() invocation. Its success depends on the port support for this operation, as advertised through the port capability set.

Traffic manager node private shaper update. Restriction for the root node: its private shaper profile needs to be valid and single rate.

Traffic manager node shared shapers update. Restriction for root node: cannot use any shared rate shapers.

Traffic manager node enabled statistics counters update.

Traffic manager node WFQ weight mode update.

Traffic manager node congestion management mode update.

Traffic manager node private WRED context update.

Traffic manager node shared WRED context update.

Traffic manager node statistics counters read.

Traffic manager packet marking - VLAN DEI (IEEE 802.1Q). IEEE 802.1p maps the traffic class to the VLAN Priority Code Point (PCP) field (3 bits), while IEEE 802.1q maps the drop priority to the VLAN Drop Eligible Indicator (DEI) field (1 bit), which was previously named Canonical Format Indicator (CFI). All VLAN frames of a given color get their DEI bit set if marking is enabled for this color; otherwise, their DEI bit is left as is (either set or not).

Traffic manager packet marking - IPv4 / IPv6 ECN (IETF RFC 3168). IETF RFCs 2474 and 3168 reorganize the IPv4 Type of Service (TOS) field (8 bits) and the IPv6 Traffic Class (TC) field (8 bits) into the Differentiated Services Codepoint (DSCP) field (6 bits) and the Explicit Congestion Notification (ECN) field (2 bits). The DSCP field is typically used to encode the traffic class and/or drop priority (RFC 2597), while the ECN field is used by RFC 3168 to implement a congestion notification mechanism to be leveraged by transport layer protocols such as TCP and SCTP that have congestion control mechanisms. When congestion is experienced, as alternative to dropping the packet, routers can change the ECN field of input packets from 2'b01 or 2'b10 (values indicating that source endpoint is ECN-capable) to 2'b11 (meaning that congestion is experienced). The destination endpoint can use the ECN-Echo (ECE) TCP flag to relay the congestion indication back to the source endpoint, which acknowledges it back to the destination endpoint with the Congestion Window Reduced (CWR) TCP flag.

Traffic manager packet marking - IPv4 / IPv6 DSCP (IETF RFC 2597). IETF RFC 2597 maps the traffic class and the drop priority to the IPv4/IPv6 Differentiated Services Codepoint (DSCP) field (6 bits). Here are the DSCP values proposed by this RFC:

                     Class 1    Class 2    Class 3    Class 4
                   +----------+----------+----------+----------+
  Low Drop Prec    |  001010  |  010010  |  011010  |  100010  |
  Medium Drop Prec |  001100  |  010100  |  011100  |  100100  |
  High Drop Prec   |  001110  |  010110  |  011110  |  100110  |
                   +----------+----------+----------+----------+

There are 4 traffic classes (classes 1 .. 4) encoded by DSCP bits 1 and 2, as well as 3 drop priorities (low/medium/high) encoded by DSCP bits 3 and 4.
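A minimal usage sketch (illustrative, not part of the header itself): the port ID, queue count, rate, profile ID and node IDs below are assumptions, and error handling is reduced to early returns. The general flow is: add a shaper profile, add the root node with RTE_TM_NODE_ID_NULL as parent, add one leaf node per Tx queue, then freeze the hierarchy with rte_tm_hierarchy_commit() before the port is started.

#include <string.h>
#include <rte_tm.h>

static int
setup_tm_hierarchy(uint16_t port_id, uint32_t n_txq)
{
	struct rte_tm_error error;
	struct rte_tm_shaper_params shaper;
	struct rte_tm_node_params node;
	uint32_t root_id = n_txq; /* leaf IDs are 0 .. n_txq-1, so pick an ID above that range */
	uint32_t q;

	/* Single rate shaper profile: ~100 Mbps committed rate (bytes/sec), FCS counted in frame length. */
	memset(&shaper, 0, sizeof(shaper));
	shaper.committed.rate = 100000000 / 8;
	shaper.committed.size = 4096;
	shaper.pkt_length_adjust = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
	if (rte_tm_shaper_profile_add(port_id, 0, &shaper, &error) != 0)
		return -1;

	/* Root (non-leaf) node: parent is RTE_TM_NODE_ID_NULL, private shaper is profile 0. */
	memset(&node, 0, sizeof(node));
	node.shaper_profile_id = 0;
	node.nonleaf.n_sp_priorities = 1;
	if (rte_tm_node_add(port_id, root_id, RTE_TM_NODE_ID_NULL, 0, 1,
			RTE_TM_NODE_LEVEL_ID_ANY, &node, &error) != 0)
		return -1;

	/* Leaf nodes: IDs 0 .. n_txq-1 are predefined, one per Tx queue of the port. */
	memset(&node, 0, sizeof(node));
	node.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
	for (q = 0; q < n_txq; q++)
		if (rte_tm_node_add(port_id, q, root_id, 0, 1,
				RTE_TM_NODE_LEVEL_ID_ANY, &node, &error) != 0)
			return -1;

	/* Freeze the start-up hierarchy before rte_eth_dev_start(). */
	return rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &error);
}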
https://doc.dpdk.org/api-22.07/rte__tm_8h.html
CC-MAIN-2022-40
refinedweb
1,657
62.48
Date: Tue, 14 Nov 2000 14:30:59 -0800 From: FreeBSD Security Advisories <security-advisories@FREEBSD.ORG> Subject: FreeBSD Security Advisory: FreeBSD-SA-00:69.telnetd To: BUGTRAQ@SECURITYFOCUS.COM -----BEGIN PGP SIGNED MESSAGE----- ============================================================================= FreeBSD-SA-00:69 Security Advisory FreeBSD, Inc. Topic: telnetd allows remote system resource consumption. Category: core Module: telnetd Announced: 2000-11-14 Credits: Jouko Pynnonen <jouko@SOLUTIONS.FI> Affects: FreeBSD 3.x (all releases), FreeBSD 4.x (all releases prior to 4.2), FreeBSD 3.5.1-STABLE and 4.1.1-STABLE prior to the correction date. Corrected: 2000-10-30 (FreeBSD 4.1.1-STABLE) 2000-11-01 (FreeBSD 3.5.1-STABLE) FreeBSD only: NO I. Background telnetd is the server for the telnet remote login protocol. II. Problem Description The telnet protocol allows for UNIX environment variables to be passed from the client to the user login session on the server. However, some of these environment variables have special meaning to the telnetd child process itself and may be used to affect its operation. Of particular relevance is the ability for remote users to cause an arbitrary file on the system to be searched for termcap data by passing the TERMCAP environment variable. Although any file on the local system can be read since the telnetd server runs as root, the contents of the file will not be reported in any way to the remote user unless it contains a valid termcap entry, in which case the corresponding termcap sequences will be used to format the output sent to the client. It is believed there is no risk of data disclosure through this vulnerability. However, an attacker who forces the server to search through a large file or to read from a device can cause resources to be spent by the server, including CPU cycles and disk read bandwidth, which can increase the server load and may prevent it from servicing legitimate user requests. Since the vulnerability occurs before the login(1) utility is spawned, it does not require authentication to a valid account on the server in order to exploit. 
Remote users without a valid login account on the server can cause resources such as CPU and disk read bandwidth to be consumed, causing increased server load and possibly denying service to legitimate users./libexec/telnetd # patch -p < /path/to/patch_or_advisory # make depend && make all install Patch for vulnerable systems: Index: sys_term.c =================================================================== RCS file: /mnt/ncvs/src/libexec/telnetd/sys_term.c,v retrieving revision 1.24 retrieving revision 1.25 diff -u -r1.24 -r1.25 --- sys_term.c 1999/08/28 00:10:24 1.24 +++ sys_term.c 2000/10/31 05:29:54 1.25 @@ -1799,6 +1799,13 @@ strncmp(*cpp, "_RLD_", 5) && strncmp(*cpp, "LIBPATH=", 8) && #endif + strncmp(*cpp, "LOCALDOMAIN=", 12) && + strncmp(*cpp, "RES_OPTIONS=", 12) && + strncmp(*cpp, "TERMINFO=", 9) && + strncmp(*cpp, "TERMINFO_DIRS=", 14) && + strncmp(*cpp, "TERMPATH=", 9) && + strncmp(*cpp, "TERMCAP=/", 9) && + strncmp(*cpp, "ENV=", 4) && strncmp(*cpp, "IFS=", 4)) *cpp2++ = *cpp; } Index: telnetd.c =================================================================== RCS file: /mnt/ncvs/src/libexec/telnetd/telnetd.c,v retrieving revision 1.22 retrieving revision 1.23 diff -u -r1.22 -r1.23 --- telnetd.c 2000/01/25 14:52:00 1.22 +++ telnetd.c 2000/10/31 05:29:54 1.23 @@ -811,7 +811,7 @@ fatal(net, "Out of ptys"); if ((pty = open(lp, 2)) >= 0) { - strcpy(line,lp); + strlcpy(line,lp,sizeof(line)); line[5] = 't'; break; } @@ -1115,7 +1115,7 @@ IM = Getstr("im", &cp); IF = Getstr("if", &cp); if (HN && *HN) - (void) strcpy(host_name, HN); + (void) strlcpy(host_name, HN, sizeof(host_name)); if (IF && (if_fd = open(IF, O_RDONLY, 000)) != -1) IM = 0; if (IM == 0) Index: utility.c =================================================================== RCS file: /mnt/ncvs/src/libexec/telnetd/utility.c,v retrieving revision 1.13 retrieving revision 1.14 diff -u -r1.13 -r1.14 --- utility.c 1999/08/28 00:10:25 1.13 +++ utility.c 2000/10/31 05:29:54 1.14 @@ -330,7 +330,7 @@ { char buf[BUFSIZ]; - (void) sprintf(buf, "telnetd: %s.\r\n", msg); + (void) snprintf(buf, sizeof(buf), "telnetd: %s.\r\n", msg); (void) write(f, buf, (int)strlen(buf)); sleep(1); /*XXX*/ exit(1); @@ -343,7 +343,7 @@ { char buf[BUFSIZ], *strerror(); - (void) sprintf(buf, "%s: %s", msg, strerror(errno)); + (void) snprintf(buf, sizeof(buf), "%s: %s", msg, strerror(errno)); fatal(f, buf); } -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.4 (FreeBSD) Comment: For info see iQCVAwUBOhG9KFUuHi5z0oilAQHUZwP/Xmo3EDteE4HwZovAO6UFzNtc3xVsFaUr Thf5XvpPThIOKmyYsUOL/kRbfnU3vJUdPA21uDYKyUEil5+x8+ZAuDzJXfMxHwu8 MMD1/d5QFfvuWN5W+/msdT7XKEjTmm4f09/tMxRAEyIMeKRj2H4gWxEGmaivJtvT 6bFKtbsSW1Q= =UltL -----END PGP SIGNATURE-----
http://lwn.net/2000/1116/a/sec-freebsd-telnetd.php3
crawl-003
refinedweb
747
51.44
Minimal CLI construction with Click

Click is an excellent library that handles a lot of the minutiae in setting up a robust Command-Line Interface. There's a TON of functionality built in, but I'm writing this notebook so I can remember how to set up straightforward implementations, such as the one found in my library kneejerk.

from IPython.display import Image
Image('./images/kneejerk.PNG')

A Minimal Example

I whipped up a small script that provides an interface to either:

- Convert an ASCII number to its corresponding character
- Convert a character to its corresponding ASCII number

Here it is in its entirety.

!cat dumb_cli.py

import click

@click.group()
def main():
    pass

@main.command(help='Take a letter and print its corresponding ascii number')
@click.argument('letter')
def l2n(letter):
    print(ord(letter))

@main.command(help='Take an ascii number and print its corresponding letter')
@click.option('--upper', '-u', help='Print upper-case letter', default=False)
@click.argument('number')
def n2l(number, upper):
    number = int(number)
    if upper:
        print(chr(number).upper())
    else:
        print(chr(number).lower())

if __name__ == '__main__':
    main()

Multiple Commands

Because we have two functions, we want to have two top-level commands: l2n and n2l. This is achieved by using the @click.group() decorator on top of a throwaway function, main(), which creates a new decorator. We then use the @main.command() decorator on top of our l2n() and n2l() functions.

!python dumb_cli.py

Usage: dumb_cli.py [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  l2n  Take a letter and print its corresponding ascii number
  n2l  Take an ascii number and print its corresponding letter

Customizing Each Command

Notice that in the @main.command() call, we provide a help= argument, which gives us the nice printout. From here, we supply the command line options and arguments for each command. A few things to point out:

- We can provide both the long-form (--) and short-form (-) flags
- We can set default values
- We can put help statements for each option

!python dumb_cli.py n2l --help

Usage: dumb_cli.py n2l [OPTIONS] NUMBER

  Take an ascii number and print its corresponding letter

Options:
  -u, --upper TEXT  Print upper-case letter
  --help            Show this message and exit.

Decorator Organization

Finally, one last note on the execution order. Copy/pasting the decorator and definition of n2l():

@main.command(help='Take an ascii number and print its corresponding letter')
@click.option('--upper', '-u', help='Print upper-case letter', default=False)
@click.argument('number')
def n2l(number, upper):

The decorators execute from the bottom up, supplying function arguments from left to right. Be very careful that these align!
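For completeness, actually invoking the two commands looks like this (the input letters and numbers are just examples, and the outputs assume standard ASCII values):

!python dumb_cli.py l2n a
97

!python dumb_cli.py n2l 98
b

!python dumb_cli.py n2l 98 --upper yes
B

One quirk worth noting: because --upper is declared with only a default rather than is_flag=True, Click treats it as a value-taking option (hence the TEXT in the help output above), so you have to pass it a value; any truthy value switches on the upper-casing.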
https://napsterinblue.github.io/notes/python/development/minimal_cli/
CC-MAIN-2021-04
refinedweb
439
50.94
Hi!

On Thu, Jun 12, 2008 at 4:24 PM, viz06 <vijaykumarsharma_1999@yahoo.com> wrote:
> 1. I am not able to register the namespace 'mypc' and have to manually
> modify ns_reg.properties to set the namespace.

Could you elaborate on not being able to register the namespace? Do you get an exception? If it is a configuration problem with the Spring JCR modules, please ask on the corresponding Spring mailing list, since the Jackrabbit project does not develop any code for Spring/JCR itself.

> 2. I have setup few spring enabled test cases which run to completion and
> creates the correct node structure, when I run the same test second time
> around all my previous nodes shows a different namespace prefix of 'fn'. I
> am unable to figure out from where it is getting this namespace prefix.

If this is related to 1., it might be because you modified ns_reg.properties directly. Jackrabbit stores indices to namespaces inside the persisted data, so messing with the properties files can actually switch the entire namespace for all nodes of a namespace.

Regards,
Alex

--
Alexander Klimetschek
alexander.klimetschek@day.com
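For reference, registering a custom namespace is normally done through the JCR API against an open session rather than by editing ns_reg.properties by hand. A minimal sketch, in which the namespace URI is a placeholder:

import javax.jcr.NamespaceRegistry;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

// Register the 'mypc' prefix once against the repository; the URI below is a placeholder.
void registerMypcNamespace(Session session) throws RepositoryException {
    NamespaceRegistry registry = session.getWorkspace().getNamespaceRegistry();
    registry.registerNamespace("mypc", "http://example.com/ns/mypc/1.0");
}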
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200806.mbox/%3Cc3ac3bad0806120815v5c9cfd5dud68e96d6faf0825@mail.gmail.com%3E
CC-MAIN-2016-18
refinedweb
187
66.03
Button in C#

A Button is an essential part of an application, a piece of software, or a webpage. It lets the user interact with the application: for example, if a user wants to exit the current application, he/she clicks the Exit button, which closes it. A button can be used to perform many actions, such as submit, upload, or download, according to the requirements of your program, and it can come in different shapes, sizes, and colors and be reused across applications. In the .NET Framework, the Button class represents the Windows button control; it inherits from the ButtonBase class and is defined in the System.Windows.Forms namespace.

In C# you can create a button on a Windows form in two different ways:

1. Design-Time: This is the easiest method to create a button. Drag a Button control from the Toolbox onto the form, then adjust the properties of the Button in the Properties window.

2. Run-Time: This is a little trickier than the above method. Here you create your own Button using the Button class.

- Step 1: Create a button using the Button() constructor provided by the Button class.

// Creating Button using Button class
Button MyButton = new Button();

- Step 2: After creating the Button, set the properties provided by the Button class.

// Set the location of the button
MyButton.Location = new Point(225, 198);
// Set text inside the button
MyButton.Text = "Submit";
// Set the AutoSize property of the button
MyButton.AutoSize = true;
// Set the background color of the button
MyButton.BackColor = Color.LightBlue;
// Set the padding of the button
MyButton.Padding = new Padding(6);
// Set font of the text present in the button
MyButton.Font = new Font("French Script MT", 18);

- Step 3: Finally, add this button control to the form using the Add() method.

// Add this Button to form
this.Controls.Add(MyButton);

- Example: the article's embedded C# example and its output were not preserved in this extraction; a sketch along the same lines is given below.

Important Properties of Button

Important Events on Button
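As referenced above, here is a minimal, self-contained sketch of the run-time approach. It is not the article's original listing — the form class name and the Main entry point are added here only so the snippet compiles on its own:

using System;
using System.Drawing;
using System.Windows.Forms;

public class SampleForm : Form
{
    public SampleForm()
    {
        // Step 1: create the button
        Button MyButton = new Button();

        // Step 2: set its properties
        MyButton.Location = new Point(225, 198);
        MyButton.Text = "Submit";
        MyButton.AutoSize = true;
        MyButton.BackColor = Color.LightBlue;
        MyButton.Padding = new Padding(6);
        MyButton.Font = new Font("French Script MT", 18);

        // Optional: react to clicks
        MyButton.Click += (sender, e) => MessageBox.Show("Submitted");

        // Step 3: add the button control to the form
        this.Controls.Add(MyButton);
    }

    [STAThread]
    public static void Main()
    {
        Application.Run(new SampleForm());
    }
}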
https://www.geeksforgeeks.org/button-in-c-sharp/?ref=lbp
CC-MAIN-2021-49
refinedweb
320
59.19
Looking for help with QPrinter

Hello there. I'm struggling to get QPrinter to work in my project and I'm hoping someone can point me in the right direction. When I build the project I get "'QPrinter' was not declared in this scope". That's basically all I get for errors.

Within my .cpp file where I'm making a QPrinter object I have:
#include <QPrinter>

In my .pro file I have:
QT += printsupport

Does anyone have any tips around what I can try to debug this?

J.Hilk (moderator) replied:
@Stewguy Hi, first off — did you rerun qmake / delete your build folder for a clean rebuild after adding QT += printsupport? This is the most common error people have. Second, what platform are you trying to use the module on? IIRC I tried to use it to create a PDF on mobile devices and I had to fall back to QPdfWriter.

Another reply:
@Stewguy After you have acted on @J-Hilk's comments: your code does not in itself show where you actually attempt to use QPrinter?

Asperamanca replied:
You could try out one of the examples that uses QPrinter, e.g. the Font Sampler. If you have trouble there as well, it's probably a system or Qt configuration issue. If not, it's probably something in your project.

The original poster followed up:
@Asperamanca Thanks for pointing out the example, it ended up leading me to the solution. What resolved it: the Qt template added the following lines into my .pro file

INCLUDEPATH += $$PWD/../../../../Qt/5.11.0/winrt_x86_msvc2017/include
DEPENDPATH += $$PWD/../../../../Qt/5.11.0/winrt_x86_msvc2017/include

Something about those was causing QPrinter not to load. Once I removed those lines the project was able to Build and Run. Thanks for the help everyone!
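For anyone hitting the same error, a minimal smoke test (not from the thread) that exercises QPrinter once "QT += widgets printsupport" is in the .pro file and qmake has been re-run — the output file name is just illustrative:

#include <QApplication>
#include <QPainter>
#include <QPrinter>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Write a single page to a PDF instead of a physical printer
    QPrinter printer(QPrinter::HighResolution);
    printer.setOutputFormat(QPrinter::PdfFormat);
    printer.setOutputFileName("qprinter_test.pdf");

    QPainter painter;
    if (!painter.begin(&printer))   // fails if the printer/paint device is unusable
        return 1;
    painter.drawText(100, 100, "QPrinter works");
    painter.end();

    return 0;
}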
https://forum.qt.io/topic/92054/looking-for-help-with-qprinter
CC-MAIN-2020-40
refinedweb
360
73.58
I have the main project.cpp. I wrote every class and object in it with set and get functions. I made a simple form with 3 text boxes and 1 button. It's like a cin and cout, but in a Windows form. This is the basic (incomplete) code:

#include <iostream>
#include <cstring>
using namespace std;

class system
{
private:
    int x;
    int y;
    int z; // answer
public:
    void setaddition(int a, int b);
    int getaddition();
    void print();
};

void setaddition(int a, int b)
{
    a = x;
    b = y;
}

int getaddition()
{
    cin >> x;
    cin >> y;
    z = x + y;
}

void print()
{
    cout << z; // print in textbox3
}

What I want is that textbox1 = a; textbox2 = b; basically a & b are cin, and the button & textbox3 are cout. I want the result in textbox3. Remember, I want the button to go to getaddition so it understands what it wants. I also want the textboxes to be the cin for the values.
http://www.dreamincode.net/forums/topic/280850-basic-button/page__pid__1631680__st__0
CC-MAIN-2013-20
refinedweb
150
79.09
/*
 * $Id: VoidURLConnection.java $
 */

package org.mule.providers.soap.axis.transport;

import java.net.URL;

/**
 * A fake url connection used to bypass Axis's use of the URLStreamHandler to mask
 * uris as Urls. This was also necessary because of the unnecessary use of static
 * blocking in the axis URLStreamHandler objects.
 *
 * @author <a HREF="mailto:ross.mason@symphonysoft.com">Ross Mason</a>
 * @version $Revision: 3798 $
 */
public class VoidURLConnection extends java.net.URLConnection
{
    public VoidURLConnection(URL url)
    {
        super(url);
    }

    public void connect()
    {
        // nothing to do
    }
}
http://kickjava.com/src/org/mule/providers/soap/axis/transport/VoidURLConnection.java.htm
CC-MAIN-2017-17
refinedweb
116
51.24
As a fun exercise, what is the best way to maximize the unix CPU load average without actually consuming lots of resources (CPU cycles or memory)?

This question is covered on Stackoverflow: Artificially modify server load in Ubuntu

Last week I wanted to create a backup of my HDD. I mounted it on my old Linux machine and then ran the tar cvjf command to create and compress the archive of the drive. My system has a Celeron CPU and I think it's older than 6 years. It was done after 48 hours, and the CPU worked at 99%, but it didn't use much memory — maybe 50 MB.

Now, there are many solutions for your question. Another one is to calculate the factorial of one number!!! (a big number)

Edit: I wrote a simple C program like the one below:

#include <stdio.h>

int main(void)
{
    /* 64-bit counter so the bound fits in the type; volatile keeps the
       compiler from optimising the empty loop away */
    for (volatile long long i = 0; i < 10000000000000LL; i++);
    return 0;
}

Then I compiled it with GCC, opened 5 terminals and executed a copy in each one — my load average went to 5!!!!! (The screenshot of the load average is not preserved here.)

Notice: My program depends on your CPU. I tested it with a Pentium M!
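If it helps, the "open 5 terminals" step can be scripted from a single shell. This is a sketch, not part of the original answer — it assumes the program above was saved as burn.c — and, like the answer's approach, it does burn CPU while it runs:

gcc -O0 -o burn burn.c            # -O0 so the empty loop is not optimised away
for i in 1 2 3 4 5; do ./burn & done
sleep 60 && uptime                # the 1-minute load average should approach 5
wait                              # or kill the background jobs to stop early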
http://superuser.com/questions/596972/maximize-cpu-load-average-without-consuming-resources
CC-MAIN-2015-18
refinedweb
187
71.65
Click a heading below to reveal the tips. See help on customising and using reg files. See Icon Finder to view standard Windows icons and find their Icon Index. REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\explorer\Shell Icons] "Number from list below"="Path to icon, Icon index" The list below specifies the number for the default shell icons. The path to icon is a dos path to the icon, in a registry file remembe to use \\ (regedit rmoves one \ when merging). The icon index is used when the icons are coming from a binary file like a .dll or .exe. It is the position of the icon in the file (a negitive number is the inverse of the resource ID). Omit the comma and Icon Index when using an .ICO file. To get the index create a shortcut, right click and choose Properties, then Change Icon, then Browse. Select the file and click open. The icons are numbered from 0 downwards then across. Icons can also be set under the Default Icon setting for the object in HKEY_ROOT (which is where Windows stores its' icon information), the Shell Icon setting overrides this. Download a zip file called OpenFolder.zip containing a green open icon and registry file that makes the current open folder in Explorer green. Collapse section You can also set custom icons for individual drives. This reg file is for drive C, the drive is the second last part of the key without a colon. REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\explorer\DriveIcons\C\DefaultIcon] "@"="Path to icon, Icon index" Collapse section You can also set custom icons for individual folders. Copy the following lines into a file called Desktop.ini and save it in the folder you want to customise. This file may exist in the folder in which case edit the file and add or edit the following lines. This as written sets the icon to a tree, see Shell Icons tip on this page for how to find an icon index. If designing your own icons you can use a bitmap file (bmp). A good size is 16 x 16 for small icons and 32 x 32 for large. When using bitmaps as icons the colour of the upper left pixel becomes the invisible colour. If the icon and desktop.ini file are in the same directory there is no need for a path. [.ShellClassInfo] IconFile=C:\WINDOWS\SYSTEM\shell32.dll IconIndex=41 InfoTip=An optional description for the folder that displays in a pop up tip and web view when selected This file usually has its' hidden attribute set, though it doesn't have to. Right click it and choose Properties then check the Hidden checkbox. The folder it's stored in also needs to have its' system or read only attribute set. Type in a MS-Dos command prompt or Start - Run dialog box one of the following, Attrib +s Foldername Attrib +r Foldername or right click and choose properties to set the read only attribute. Remember to enclose filenames in quotation marks if it contains a space. There is a tip about setting the system attribute on the context menu page, especially if you're going to be doing a fair bit of this. Download eight Custom Icons (customicons.zip) suitable for custom folder icons. Collapse section Dlls (Dynamic Link Libraries), OCXs (Active X controls), and CPLs (Control Panel Applets) are all DLLs. Nearly all CPLs and OCXs have an icon builtin while a fair few DLL do too.This sets these files to use their builtin icons rather than the default generic icon they use. DLLs without a builtin icon will use unknown file type. Copy the following line into a new Text Document and call it anything.reg. Double click it. 
To change back to the default use a semicolan to comment out the @="%1" and remove semicolans from @="C:\\WINDOWS\\SYSTEM\\shell32.dll,-154". REGEDIT4 ;Use a semicolan to comment out the @="%1" and remove semicolans from @="C:\\WINDOWS\\SYSTEM\\shell32.dll,-154" to undo changes ;Sets Control Panel Applets to display their own icons [HKEY_CLASSES_ROOT\cplfile\DefaultIcon] ;@="C:\\WINDOWS\\SYSTEM\\shell32.dll,-154" @="%1" ;Sets Dynamic Link Libraries to display their own icons [HKEY_CLASSES_ROOT\dllfile\DefaultIcon] ;@="C:\\WINDOWS\\SYSTEM\\shell32.dll,-154" @="%1" ;Sets Dynamic Link Libraries (OCX) to display their own icons [HKEY_CLASSES_ROOT\ocxfile\DefaultIcon] ;@="C:\\WINDOWS\\SYSTEM\\shell32.dll,-154" @="%1" Collapse section Windows treats bitmaps (bmp or Bitmap Image files) as icons. Therefore with a simple registry edit bitmaps will show a picture of themselves as a icon. SetBitmapIcons.vbs tests that Bitmap Image (bmp) files are associated with Picture.Paint because a lot of bitmap editors take the extension and set it to themselves (which is the incorrect way of doing it). If it's not associated with Picture.Paint then it offers to reassociate Bitmap Images to Picture.Paint. The icons cannot be changed if it's not associated with Picture.Paint. You are only prompted to change the association to Picture.Paint if it's not already associated. It then tests if the icon is already set to display its' own picture. If it is, it offers to restore the Window's default which is Icon 1 in MSPaint, if not it offers to set it to a picture of the bitmap. The icon cache file is then deleted and it advises that a restart of Windows is necessary. See the note in the file if Windows is not installed in its' default directories or if directories short file name has been changed. 'SetBitmapIcons.vbs 'Displays and/or changes the default association for bitmaps and can change them to use a picture of themselves as the icon. ' 'Serenity Macros 'David Candy davidc@sia.net.au ' '----------------------------------------------------- 'N O T E * * * N O T E * * * N O T E 'Edit strPbrush with the path to MSPaint (in case it's different from the default) '---------------------------------------------------- ' On Error Resume Next strPbrush="C:\PROGRA~1\ACCESS~1\MSPAINT.EXE,1" strTitle="Set Bitmap Icon" strPbrush="C:\PROGRA~1\ACCESS~1\MSPAINT.EXE,1" Dim Sh Set Sh = WScript.CreateObject("WScript.Shell") ReportErrors "Creating Shell" Msgbox "Set Bitmap Icons checks the file association for bitmaps, can restore them to the Windows default if they have been changed, and can set the icon to be a picture of the bitmap or set it back to the Windows default icon." & vbCRLF & vbCRLF & "You will be prompted for each action.", vbInformation + vbOKOnly, strTitle If Sh.RegRead("HKCR\.bmp\") ="Paint.Picture" then SetIcon Else If MsgBox("Bitmaps are associated with " & Sh.RegRead("HKCR\.bmp\") & vbcrlf & vbcrlf & "Would you llike to associate bitmaps back to Paint?",vbQuestion + vbYesNo + vbDefaultButton2, strTitle) =6 then Sh.RegWrite "HKCR\.bmp\", "Paint.Picture" SetIcon Else Msgbox "Bitmap associations not changed. Cannot set icon for bitmaps unless their associated with Paint.Picture (usually Paint).", vbInformation + vbOKOnly, strTitle End If End If ReportErrors "Main" VisitSerenity '--------------------------------------------------------- Sub SetIcon On Error Resume Next If Sh.RegRead("HKCR\Paint.Picture\DefaultIcon\") <>"%1" then If MsgBox("Icons are set to show as Windows default icon." 
& vbCRLF & vbCRLF & "This will set the default icon for bitmaps to a picture of the bitmap." & vbCRLF & vbCRLF & "Continue?",vbQuestion + vbYesNo + vbDefaultButton2, strTitle) =6 then Sh.RegWrite "HKCR\Paint.Picture\DefaultIcon\","%1" Msgbox "Bitmap icons should now be a picture of the bitmap." & vbCRLF & vbCRLF & "You'll need to restart Windows", vbInformation + vbOKOnly, strTitle FlushIconCache Else Msgbox "Bitmap icons were not changed", vbInformation + vbOKOnly, strTitle End If Else If MsgBox("Icons are set to show as a picture of the bitmap." & vbCRLF & vbCRLF & "This will set the default icon for bitmaps to Windows default" & vbCRLF & vbCRLF & "If Windows is not installed on the C drive, MSPaint is not installed, or Program Files\Accessories folder has a non standard short file name then choose No and edit this file with the correct path name." & vbCRLF & vbCRLF & "Continue?",vbQuestion + vbYesNo + vbDefaultButton2, strTitle) =6 then Sh.RegWrite "HKCR\Paint.Picture\DefaultIcon\",strPbrush Msgbox "Bitmap icons should now be Windows default icon. " & vbCRLF & vbCRLF & "You'll need to restart Windows", vbInformation + vbOKOnly, strTitle FlushIconCache Else Msgbox "Bitmap icons were not changed", vbInformation + vbOKOnly, strTitle End If End If ReportErrors "SetIcon" End Sub '--------------------------------------------------------- Sub FlushIconCache On Error Resume Next Dim fso Set fso = CreateObject("Scripting.FileSystemObject") FSO.DeleteFile sh.ExpandEnvironmentStrings("%windir%") & "\ShellIconCache", true If err.number=53 then err.clear ReportErrors "FlushIconCache" End Sub Sub ReportErrors(strModuleName) If err.number<>0 then Msgbox "Error occured in " & strModuleName & " module of " & err.number& " - " & err.description & " type" , vbCritical + vbOKOnly, "Something unexpected" Err.clear End Sub Sub VisitSerenity If MsgBox("This program came from the Serenity Macros Web Site" & vbCRLF & vbCRLF & "Would you like to visit Serenity's Web Site now?", vbQuestion + vbYesNo + vbDefaultButton2, "Visit Serenity Macros") =6 Then sh.Run "http:\\\biz\serenitymacros" End If End Sub Collapse section The setting for icons are stored here in the registry. There are other icon settings, but they can be changed with the Display - Appearance and Web Tabs in Control Panel and the Folder Options on the View menu in Explorer. The Explorer Options section on this page control the title wrapping The first is in pixels, and the second is in bits per pixel REGEDIT4 [HKEY_CURRENT_USER\Control Panel\Desktop\WindowMetrics] "Shell Small Icon Size"="16" "Shell Icon Bpp"="16" The size of the icon cache is stored here in icons. This is the default. Increase this if Windows often shows the wrong icon or no icons. REGEDIT4 [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer] "Max Cached Icons"="4000" These are the settings set with Control Panel. Choosing Use Large Icons on the Effects page of the Display Control Panel applet sets Shell Icon Size to 48. REGEDIT4 [HKEY_CURRENT_USER\Control Panel\Desktop\WindowMetrics] "IconSpacingFactor"="100" "IconFont"=hex:06,00,00,00,00,00,00,00,90,01,00,00,00,00,00,00,00,00,4d,53,20,\ 53,61,6e,73,20,53,65,72,69,66,00,c7,2d,89,00,08,00,00,00,00,00,00,00,90,01,\ 00,00,00,00 "IconSpacing"="-1125" "IconVerticalSpacing"="-1125" "Shell Icon Size"="32" This is the setting set with Explorer Options REGEDIT4 "IconTitleWrap"="1" There are plenty of freeware icon editors about. 
Imagedit and Aniedit (animated cursors) are available in both the Windows 95 and 98 resource kits. Collapse section To refresh the icons in the system change the size of icons in Start - Control Panel - Display - Appearance, click apply then change the size back. Collapse section In Windows the Namespace paradigm replaces the drive/directory paradigm of MS-Dos and Windows 3.1. While the file system is still an important part of the namespace it's detined to be less so in the future. The namespace refers to objects in the Windows' shell and their relationship to each other. The top object in the name space is the Desktop, with My Computer, Recycle Bin, and My Documents as the next level. Some objects in the namespace are files or folders and some are not. Control Panel is not a filesystem folder but contains a type of objects stored is the system directory. The namespace defines ways of viewing and manipulating objects. When explorer shows a view of a folder it determines if it's part of the file system or if it has a namespace extension attached. If it has a namespace extension then explorer and the extensions program code work out how to display it. When viewing an extension to the namespace the underlying files (if any) are not being shown - but the view that the extension's program shows which may or may not show the actual files. To show this start File Manager (type winfile in the Start - Run dialog box, make sure show Hidden Files is checked in File Manager's View menu - File Types) and view the folder c:\recycled. If the recycled bin is empty you'll see a desktop.ini and a file called info2, if you have deleted files in the Recycled Bin you'll see the files with names like DC0.the original exyension. These are the actual files deleted and their names are stored in the info2 file. Each hard drive has it's own Recycled folder with files deleted from that drive. When the Recycled Bin is viewed in Explorer, Explorer loads the Recycled Bin program code and this code looks in all drives and retrieves all the file names from all the Info2 files and gives Explorer a list of all files deleted on the computer. Explorer itself only knows how to ask for a list of the contents of a folder. If you are on a network browse to another computer's Temporary Internet Files folder (usually C:\Windows\Temporary Internet Files) and you'll see the files in your Temporary Internet Files. This is because Explorers loads the Temporary Internet Files program code which is designed to show the contents of your Temporary Internet Files. There are three methods to add an extension to the shells namespace (the shell is only one of many - Outlook has it's own namespace); When a folder that is part of the file system has it's Read Only or System attribute set, explorer looks for a file in that folder called desktop.ini (which may or may not be invisible in Explorer) and sees if it has a line like this one from the desktop.ini in c:\recycled [.ShellClassInfo] CLSID={645FF040-5081-101B-9F08-00AA002F954E} Explore then looks up this number (a CLSID) in the following registry key HKEY_CLASSES_ROOT\CLSID and this key has entries that tell Explorer where to find the program code to ask what is in the folder. The Desktop.ini is documented elsewhere on this page, see Setting Icons for Individual Folders and Desktop.ini Syntax. This topic is incomplete. Will be finished soon Collapse section This topic is incomplete. 
Will be finished soon [.ShellClassInfo] ConfirmFileOp=0 NoSharing IconFile=C:\WINDOWS\SYSTEM\shell32.dll IconIndex=41 InfoTip=An optional description for the folder that displays in a pop up tip and web view when selected HTMLInfoTipFile= Settings\Comment.htt CLSID={CLSID} CLSID2={CLSID} UICLDID={CLSID} [ExtShellFolderViews] {5984FFE0-28D4-11CF-AE66-08002B2E1262}={5984FFE0-28D4-11CF-AE66-08002B2E1262} {8BEBB290-52D0-11d0-B7F4-00C04FD706EC}={8BEBB290-52D0-11d0-B7F4-00C04FD706EC} {BE098140-A513-11D0-A3A4-00C04FD706EC} [{5984FFE0-28D4-11CF-AE66-08002B2E1262}] PersistMoniker= Settings\Folder.htt PersistMonikerPreview=%WebDir%\folder.bmp [{BE098140-A513-11D0-A3A4-00C04FD706EC}] Attributes=1 IconArea_Image=Folder Settings\Background.bmp IconArea_Text=0x00408000 IconArea_TextBackground=0x000000FF [{8BEBB290-52D0-11d0-B7F4-00C04FD706EC}] MenuName=T&humbnails ToolTipText=T&humbnails HelpText=Displays items using thumbnail view. Attributes=0x60000000 Collapse section
http://www.mvps.org/serenitymacros/icon.html
crawl-001
refinedweb
2,441
55.54
Sometimes people think they can switch stacks by just loading a new value into the ESP register. This may seem to work but in fact it doesn't, because there is more to switching stacks than just loading a new value into ESP.

On the x86, the exception chain is threaded through the stack, and the exception dispatch code verifies that the exception chain is "sane" before dispatching an exception. If you summarily yank ESP into a location outside the stack the operating system assigned to the thread, then the exception chain will appear to be corrupted, and once the exception dispatch code notices this, it will declare your program to be unrecoverably corrupted. It can't even raise an exception to indicate that this has happened, even if it wanted to, because it doesn't even know where the exception handlers are!

There are other parts of the system that rely on the stack pointer remaining inside the correct stack. For example, the code that expands the stack on demand needs to know where the stack is and how big it can get. (And the ia64 architecture has two stack pointers.) If a part of the system needs to do work with those values and it notices that the real stack pointer is "in la-la land", it will start taking drastic measures (typically by terminating the program).

If you want to switch stacks, use a fiber. Fibers provide a way to capture the state of a computation, which includes the instruction pointer and the stack.

My impression was that Fibers were unofficially deprecated. This, of course, does not imply they are not useful. In my study of SQL Server 2005 it would seem that using Fibers in combination with asynchronous I/O might be beneficial, but I would have trouble justifying implementing Fibers when I/O Completion Ports already seem to give very good concurrency. SQL Server does, however, give you the option to enable Fibers during execution.

@Tom: Fibers probably will not ever be removed from the OS, but they are usually a lot more trouble than they are worth (unless you're already going to the trouble of munging directly with esp).

I can't imagine what awful breakage Raymond had to look at to find this problem...
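To make the "use a fiber" suggestion concrete, here is a minimal Win32 sketch (not from the post or the comments) showing a controlled switch onto another stack and back; error handling is omitted:

#include <windows.h>
#include <stdio.h>

/* Runs on the stack that CreateFiber allocated for this fiber. */
static VOID CALLBACK FiberProc(PVOID mainFiber)
{
    printf("running on the fiber's own stack\n");
    SwitchToFiber(mainFiber);          /* switch back to the original stack */
}

int main(void)
{
    /* The current thread must become a fiber before it can switch. */
    PVOID mainFiber = ConvertThreadToFiber(NULL);

    /* 0 = default stack size; pass the main fiber so the worker can return. */
    PVOID worker = CreateFiber(0, FiberProc, mainFiber);

    SwitchToFiber(worker);             /* the stack switch happens here */
    printf("back on the original stack\n");

    DeleteFiber(worker);
    return 0;
}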
If Windows' Posix subsystem supports alternate signal stacks even for stack overflows, then you might be able to use a similar mechanism, but I believe it isn't supported. This article: says that sigaltstack isn't supported by Services for Unix, and has no equivalent. Anyway, it'd be better if people didn't have to or want to write crash handlers, but switching stacks is pretty much required if you want a crash handler to work in the case of a stack overflow. I understand that some people want to show the message when their application crashes because of the stack overflow instead of leaving the user miffled with the application window suddenly disappearing without a trace. Raymond already explained that the OS cannot do that for you because the application has lost the stack and the OS would want to run the error message in the context of your application. If OS had its own thread and stack for displaying application errors which worked for _any_ application error (stack overflow included), then nobody would have to even think about writing such hacks. Could this code work? #include <stdio.h> #include <windows.h> void no_overflow(void) { int dummy[1024]; dummy[1023] = 0; } void overflow(void) int dummy[262144]; dummy[262143] = 0; long __stdcall Filter(struct _EXCEPTION_POINTERS *ep) DWORD Code; PVOID Address; DWORD *esp; int i; Code = ep->ExceptionRecord->ExceptionCode; Address = ep->ExceptionRecord->ExceptionAddress; esp = (DWORD*)ep->ContextRecord->Esp; if (Code == STATUS_STACK_OVERFLOW) { printf("Stack overflow at address 0x%p\n", Address); printf("Called from %p\n", esp[1] - 5); printf("Called from %p\n", esp[3] - 5); printf("Terminating execution"); return EXCEPTION_EXECUTE_HANDLER; } else { return EXCEPTION_CONTINUE_SEARCH; } int main(int argc, char *argv[]) SetUnhandledExceptionFilter(Filter); no_overflow(); overflow(); return 0; It is just a quick hack and it works for this simple test case, not sure if it would work in debug build, nor in a complex application, and it most definitely wouldn't work without changes in 64-bit mode. It works if alloca causes stack overflow. Doesn't work for stray pointers hitting stack above the guard page. MSDN says: "The exception handler specified by lpTopLevelExceptionFilter is executed in the context of the thread that caused the fault. This can affect the exception handler's ability to recover from certain exceptions, such as an invalid stack." This is just the kernel of an idea... if you're expecting a stack overflow, how about converting the thread to a fiber to begin with, then copying the EXCEPTION_POINTERS data to a pre-allocated location, and switch fibers in the exception handler? I haven't really thought this through yet, but it could be worth considering. I have experimented a bit with this and here is what I have found -- catching stack overflow generated by alloca() is not a problem. Sure Raymond has a point about esp[1]-5, and re-entering printf() (I never said it was good aynway) but that is not the real problem. The problem is catching the stack overflow caused by accessing the stack beyond the guard page by a stray pointer. Since the exception handler is executing in the context of a thread that caused the exception the process gets terminated and you don't even get the chance to catch the exception. If I understand all of this correctly, the only way to catch the _real_ stack overflow would be to launch a process that monitors the thread in some way and make that process dump the context before the thread gets terminated. 
Obviously, that is exactly what any debugger can do. I would be very interested to hear of some more lightweight approach to this problem. The more I think of the whole concept of exception handling, the more it seems to me like an arbitrary and rather poor design. It may be inherited from the underlying x86 architecture, but I still believe that the whole winding/unwinding thing with pointers to exception structures intermingled with the random bits of data on the stack doesn't even remotely rhyme with the word "reliable". I remember Motorola 68000 CPU where you could directly modify CPU exception handler pointers (those were called traps) so if your program did something bad you were sure to catch it. In my Amiga 500 I had 68010 which differed from 68000 by the fact that it didn't allow unaligned word accesses to memory. Some programs and games refused to work. My workaround was to write exception handler which could fit into the boot sector. It was intercepting alignment fault and emulating access to the odd addresses by using byte instead of word access. If only handling stack overflow on x86 would have been that easy. By the way, Motorola also had two stack pointers -- supervisor and user stack pointer both using the same register A7. Anyone interested in some advanced application debugging should see this:
http://blogs.msdn.com/oldnewthing/archive/2008/02/15/7703995.aspx
crawl-002
refinedweb
1,441
57
CosmosDb and Client Performance Introduction. - Partition Key: Guid (this is to achieve a reasonable spread across all physical partitions). - Client instances. Once a client is created, it is re-used. A client instance is not created per request/operation. - .OpenAsync() is called on each client prior to any operations being performed. This ensured that the route table was populated prior to any operations and did not affect initial/first operation performance, and thus skew the performance figures slightly. The scenario For this scenario, we are going to simulate and measure inserts per second. NoSQL databases are (broadly speaking) better at reads than writes but our scenario called for a high degree of writes. We wanted to know whether both our application and CosmosDb could manage the throughput required, but also what settings provided the best throughput since their seems to be a lot of options to tweak. Caveats Usual caveats apply here. This is really a short experiment designed to provide enough information to go on, but is not definitive. This is not quite scientific, but indicative enough to show how different configurations can affect performance when using CosmosDb. The test code is my own (and pretty ugly), and this was tested on my own personal Azure subscription so did not go too high in consumption rates and costs. Environment setup - CosmosDb collection: Started initial tests at 10,000 RU's (5 partitions with 2,000 RU's per partition), later changed to 20,000 RU's (4,000 RU's per partition) - The client code used for the test is located here: - Note use of parallel task library to achieve concurrency within the client. - Client code aims to insert 1,000,000 records as quickly as possible. - Client environments used - Azure App Service, 2 core instance, 3.5 Gb memory (B2 instance) - Azure App Service, 4 core instance, 7 Gb memory (B3 instance) - Azure Virtual Machine, Basic, A4, 8 Core, 14Gb memory, HDD - Azure Virtual Machine - Standard, D8s V3, 8 Core, 32Gb memory, SSD - Document that will be inserted is relatively simple and small: { "KeyNumber": 1, "SomeText": "blah blah", "SomeCount": 100, "PartitionKey": "09df03d7-ceb6-482d-a979-3db0b3dc2398" } and is represented by this class public class PerfTestDto { public int KeyNumber { get; set; } public string SomeText { get; set; } public int SomeCount { get; set; } public string PartitionKey => Guid.NewGuid().ToString(); } Note that the PartitionKey is a Guid. Test results Listed here are the individual tests and the results of those tests. It is quite lengthy so if you are only interested in the outcome or findings, you may want to skip this section and head straight to the findings at the bottom. Note that only changes to each environment and tests are listed in the test results to reduce excessive text. So if the previous test lists a connection mode of direct and the next test doesn't change that setting, it will not be listed with the test details. In addition, metric snapshots will be listed where I believe it may be "interesting" and not on all tests. Finally, I have included some commentary on each test to try and highlight what I saw happening. Test 1 - Environment: Azure App Service, 2 core instance, 3.5 Gb memory (B2 instance) - CosmosDb: 10,000 RU/s, 5 partitions, 2,000 RU/s per partition - Indexing mode: Consistent - Connection Mode: Standard - HTTPS - Client: 10 concurrent tasks, inserting 100,000 records each. - Time elapsed: 0:27:40.277 - Throughput: 602 inserts per second. 
- Comments: This test is going to act as my baseline. Test 2 - Connection Mode: Direct gateway mode, connection TCP - Time elapsed: 0:17:57.701 - Throughput: 927 inserts per second. - Comments: Pretty big improvement just by changing the connection mode. Not surprising though. Test 3 - Client: 1 task inserting 1,000,000 records sequentially - Time taken: 42 minutes before being manually cancelled. - Comments: Probably should have done this first but it wasn't a realistic scenario for me, however I did want to see the effect of no parallelism. As you can see, it took a long time and I got bored. Conclusion, performance sucked. Test 4 - Client: 20 tasks, inserting 50,000 records each. - Time taken: 0:14:38.209 - Throughput: 1,138 inserts per second. - Comments: Increasing the number of concurrent tasks helped here (from 10 to 20, inserting less records each). Test 5 - Client: 40 tasks inserting 25,000 records per task - Time taken: 0:14:50.564 - Throughput: 1,122 inserts per second. - Approximately same portal metric profile as Test #4 - Comments: No real improvement so we are clearly hitting a bottleneck or peak for this configuration. Test 6 - Client: 20 tasks inserting 50,000 records per task - MaxConnections: 1000 (previously left at default). - Time taken: 0:14:39.581 - Approximately same portal metric profile as Test #5 and #4 - Comments: No real discernible change here, or at least none of note by changing MaxConnections. Test 7 - Indexing mode: Lazy - Time taken: 0:14:31.130 - Throughput: 1,147 inserts per second. - Comments: I had thought relaxing the indexing mode may help. It did marginally but not by much and would require further experimentation on this setting alone. Test 8 - Consistency mode: Eventual (default is Session) - Time taken: 0:14:44.665 - Throughput: 1130 inserts per second - Comments: Not much change here, a little less but nothing of note. Test 9 - CosmosDb: Increased to 20,000 RU/s, 5 partitions, 4,000 RU/s per partition - Environment: Azure App Service, 4 core instance, 7 Gb memory (B3 instance) - Time taken: 0:9:13.807 - Throughput: 1805 inserts per second - Comments: I increased the throughput/RU per second in anticipation of more throughput. As expected, increasing the Azure app service instance to a 4 core machine helped to increase throughput. Test 10 - Client: 40 tasks inserting 25,000 records per instance - Time taken: 0:9:47.259 - Throughput: 1703 inserts per second - Comments: Trying to play with an optimal parallelism setting by increasing to 40 tasks (from 20) with less to insert. Bit of a drop so probably hitting the next peak/ceiling. At this point, it is looking like more cores and processing ability are the key. Test 11 - Environment: Virtual Machine, Basic, A4, 8 Core, 14Gb memory, HDD - Client: 20 tasks, inserting 50,000 records per task - Time taken: 0:8:20.205 - Throughput: 1998 inserts per second - Comments: Decided on using an actual virtual machine where I can really play with core/processing power settings. As somewhat expected, increasing to an 8 core VM (albeit basic) helped to improve throughput. Test 12 - Client: 40 tasks inserting 25,000 records per task - Time taken: 0:8:13.253 - Throughput: 2027 inserts per second - Comments: Increasing parallelism with the added processing power and cores seemed to increase our peak/ceiling so throughput increased again. Test 13 - Client: 100 tasks, all inserting 10,000 records per instance - Time taken: 0:8:41:34 - Throughput: 1919 inserts per second. 
- Comments: Increasing number of tasks by a large margin here. As you can see, a drop in throughput so hitting another peak. Test 14 - Client: 50 tasks, all inserting 20,000 records per instance - Time taken: 0:8:5.46 - Throughput: 2060 inserts per second - Comments: Dropping to 50 tasks instead of 100, but more than the 40 of Test #12 increased throughput again to our highest thus far. For this VM configuration, this is looking like the sweet spot. Test 15 - Client: 60 tasks inserting 16,667 records per task - Time taken: 0:8:11.968 - Throughput: 2031 inserts per second - Comments: Increasing parallelism a little actually drops throughput, confirming previous thoughts that 50 tasks (as in Test #14) is the sweet spot for this configuration. Test 16 - Environment: Virtual Machine, Standard, D8s V3, 8 Core, 32Gb memory, SSD, Note: Enabled low latency networking on creation - Client: 60 tasks inserting 16,667 records per task (same as prior) - Time taken: 0:3:38.224 - Throughput: 4245 inserts per second
I'd love to know your experience or thoughts around this.
https://weblogs.asp.net/pglavich/cosmosdb-and-client-performance
CC-MAIN-2019-47
refinedweb
1,748
63.29
I'm using an ES08A servo. I connect red to 5V, yellow to pin 9, and ground to GND, and I'm powering it from the cable.

#include <Servo.h>

Servo myservo;  // create servo object to control a servo

void setup() {
  myservo.attach(9, 600, 2400);  // attaches the servo on pin 9 to the servo object
}

void loop() {
  myservo.write(15);  // move the servo to position 15
  delay(2000);        // wait for the servo to get there
  myservo.write(0);   // move back to position 0
}

I'm using this code and several other codes but the result is the same: the servo only moves for 0.0001 seconds the moment I disconnect and reconnect it to 5V, but it does not move fully. The problem, I think, is one of these:
- the cable does not provide 5V power correctly
- the servo is dead (it's new though)
- I should use different values in the code for this servo

If anyone knows the problem, let me know.
https://forum.arduino.cc/t/servo-problem-es-08a/127734
CC-MAIN-2021-43
refinedweb
151
64.75
UnityScript’s long ride off into the sunset It’s been with us since Unity 1.0, but its time is finally coming: we have begun the deprecation process for UnityScript, the JavaScript-like scripting language available as an alternative to C# in Unity today. In this blog post, we’ll go into the details behind the decision, but to briefly summarise: continued support for UnityScript is obstructing our ability to deliver new scripting-related features, and only about 3.6% of projects are using it heavily. Why deprecate it? Every time we remove something from Unity, we assume there are always some users that it’s going to inconvenience. So, it’s important that we have made sure our reasons are worthwhile. There’s a lot happening around scripting at Unity right now. Some of the biggest pieces: - The Scripting Runtime upgrade, which brings the ability to use .NET 4.6 and C# 6. - The JobSystem, making it possible to easily write multithreaded code in a way that protects you against race conditions and deadlocks. - The NativeArray type, allowing you to create and work with large arrays that have their storage controlled by native code, giving you more control over allocation behaviour and exempting you from garbage collection concerns. - Control over the script compilation pipeline, so you can customize how your scripts are combined into assemblies. That’s just a fraction of what we’re working on right now – there are many more things happening, and that doesn’t even include some of the projects we have planned for the future. In addition to the specific scripting projects, we’re also increasingly opening up the engine and growing our API surface – which we want to do using the most appropriate language constructs available. Today, UnityScript and C# are fairly evenly matched in terms of functionality and performance: there’s nothing you can do with C# that you cannot do with UnityScript. C# is the clear winner when it comes to developer ecosystem – not just the millions of C# tutorials and samples out there, but also tooling support, like refactoring and intellisense in Visual Studio – but you could say, today, that UnityScript works, and who needs those fancy tools anyway? It’s true today, but it won’t be true forever. As we upgrade the Scripting Runtime and version of C# we support, there will begin to be things that UnityScript doesn’t do as well as C#, or even can’t do at all. It already does not support default values for method parameters, and there are more language features coming to C#, such as ref return, that will have the same problem. We don’t use these language features in the API today, but we want to, both for performance and to achieve a clean API design. It’s all just software, and we could take the time to implement these missing pieces into UnityScript. Time is not free, though; asking an engineer to work on bringing UnityScript up-to-date means taking them away from working on something else (like one of the new features I mentioned above – or just from fixing bugs). This isn’t even mentioning the time we already invest in maintaining UnityScript – supporting it in the Script Updater, supporting it in the documentation, and so on. So let’s look at the other side of this: how many users will be affected? The Editor periodically sends us data about your project, including the different file types you’re using and how much you’re using them, so from that we can calculate statistics about how many projects are actually using UnityScript. What we found was: -. 
What this suggests to us is that the majority of you who still have UnityScript code aren’t using it heavily. You may even not be actively using it at all: a .js file in the project might be an example script for an Asset Store package, rather than code that you are actually relying on. Therefore, an early step in our deprecation plan is to start working with Asset Store publishers to get rid of packages that are providing these files – more on this below. To the 3.6% of you who are using it more heavily – and especially the 0.8% who are using it exclusively – we are sorry. We know that this decision sucks for you. We’re taking some steps to try and smooth the transition, that I’ll describe below; and we hope that you will eventually agree with us that it was worth it in the end. How will it happen? We’re not just going to pull the plug overnight. Here’s what you’re going to see us do: Firstly, as of the beginning of June, we have amended the Asset Store submission policy to reject packages that contain UnityScript code. All new code that you’re writing for Asset Store packages should be in C#. (We ran this past the Asset Store Publishers discussion group before we did this, to give them a heads-up). Soon, we will begin a scan of all existing packages on the Asset Store to find ones that contain UnityScript files, and will contact publishers to ask them to port their code to C#. After a while, any package that hasn’t been ported will be removed from the store. Secondly, you might have already noticed: the Unity 2017.2 beta no longer has a ‘Javascript’ (a.k.a UnityScript) option in the Create Assets menu. All we have done at this point is remove the menu item; all the support is still there, and you will still be able to create new UnityScript files outside of Unity (for example, via MonoDevelop). We’re doing this to help ensure that new users do not continue to adopt UnityScript; it would be irresponsible of us to let them invest time in learning it when it is not long for this world. Thirdly, we have begun work on a UnityScript -> C# automatic conversion tool. There are a few of these out there already, but we weren’t happy with the approaches they use; we already learnt a lot about operating on UnityScript code when we wrote the Script Updater, so we decided just to apply that knowledge and build our own solution. We’ve not yet decided whether this will be integrated directly into Unity or just available as a separate open-source tool, but either way we are expecting to have something available by the time 2017.2 ships later this year. We’ll have a follow-up blog post about this tool when it’s ready. After that, we will be watching our Analytics numbers. Our hope is that we’ll see a fairly quick decline in the use of UnityScript – particularly in the “fewer than 10% of scripts” group, who have less code to migrate – but if we don’t, we’ll pause our plans and investigate what’s blocking people from migrating. Sometimes it’s just a matter of timing, but other times there are real issues, and we want to make sure we didn’t miss something before we switch it off entirely. Once we’re content that the usage level is low enough, Unity will no longer ship with the UnityScript compiler, and will no longer recognise .js files as user script code. We’ll also remove the UnityScript examples from the documentation, and remove UnityScript support from the Script Updater. 
The UnityScript compiler will remain available on Github at in case you need it for anything – we will not be accepting any pull requests to it, but you can fork it and use it for whatever you need. A note about Boo We announced back in 2014 that we were dropping Boo support from the documentation and Editor UI. The Boo compiler itself has stuck around, though, because UnityScript is actually a layer on top of Boo – it uses the Boo runtime libraries, and the UnityScript compiler is written in Boo. This has allowed you to continue using .boo files in your projects, even if there’s nothing in Unity that mentions it any more. The removal of UnityScript support will also mean the final removal of the Boo compiler. At this point, only 0.2% of projects on 5.6 contain any .boo files, and only 0.006% of projects have more than 3 boo files in. Again: we’re sorry, but its time has come. Conclusion We hope this post has explained our reasoning clearly, and given you some reassurance that we are not just doing this without having thought carefully about it. Going forward, this is the kind of process we want to follow for every situation in which we are removing a feature: announce our intentions, push for change through the Asset Store and Editor UI tweaks, but ultimately make the decision based on actual data about what you’re all doing. Deprecating and removing features can feel like the opposite of progress sometimes, but it’s an important part of streamlining Unity; like a forest fire clearing the way for new growth, it helps clear the way for us to deliver the fixes and features you want as quickly as possible. 135 replies on “UnityScript’s long ride off into the sunset” very smart decision. Engineering hours are very expensive nowadays and allocating them to support and maintain a dying horse is a pure waste of time. as a C# developer , I am very excited to see .NET 4.6, C# 6 and hopefully Roslyn being adopted as the ONLY scripting option for Unity3D Leave JavaScript and all its derivatives where they belong –> web development (gross!) THX for all ¡¡ >This has allowed you to continue using .boo files in your projects, even if there’s nothing in Unity that mentions it any more. UnityEngine.Boonamespace? Shouldn’t be a big deal as long as you give some mechanism for those projects which adopted Unity Script to work their way through the pipeline. You might want to also start talking about how you’re going to kill off all the legacy content which has UnityScript in it. Nothing sucks more than having a bunch of dated documentation around telling you how to do something that is no longer supported. That said, would have been nice to have support for Python at some point but it looks like that’s never going to happen now ;) Undoubtedly a good decision. Somehow i doubt you guys have time to spare, and maintaining support for two languages must inevitably come at the cost of new features, bugfixing etc. Hopefully now, in time we will see support for the newer language features, such as task-based async. I myself am fairly new to Unity, and have nothing invested in unityscript. I see some commenters chose Unity for the JavaScript(ish) support. Me, I would never have chosen Unity if that was the language I had to use. So that goes both ways. Keep up the good work. And kudos for the open, professional way you go about it. Here is my experience in Unity: I was the project manager for a cross platform/cross play game for my senior project at Drexel. It’s a great tool! 
I’m also close to getting my bachelor’s in interactive digital media (web development and design basically). Here’s what I have to say about the removal of UnityScript: UnityScript should have never been a thing. It should have ALWAYS been vanilla javascript. This always leaves Unity open to either hardcore C# programmers, web developers, and front end developers. I feel if Unity went all the way with vanilla javascript support it would have been quite a different world. Unity uses vanilla C#. I just wish the same was always true for javascript. lol Did the entire 3% show up to comment? :D It’s been long overdue. If you read the article, the reasoning is more than worth dropping it. You exist in the world of technology, so having to learn new stuff and change over time is a never ending process. As for those mentioning the documentation, it’s insanely easy to translate logic from UnityScript to C#, and vice versa. Also, C# is very easy to learn, you may even enjoy it. ;) Why Unity will remove UnityScript completely? Why Unity not only freeze the development of UnityScript and keep it as a possibility to transfer simple javascipt logic from web projects to native Unity projects? E.g. I have developed a simple Javascript to UnityScript framework to transfer the basic game logic from my web project. I need only basic UnityScript functionalities, no parallel processing, no new .NET libraries. Hi, just want to say that the latest released version of the tool (link below) does preserve most of the comments. Best Adriano Unfortunately there are stille some doc pages where there is nothing in C# (or less info than JS section) : Example of JS used instead of C# in the doc : Example of less info in C# than JS section (here’s the excepted results, ex : “// Prints 10”): While I certainly feel your pain Unity-folk (maintaining two disparate languages is effectively impossible), you will be losing a growing stream of new developers. As a web developer, the reason I chose Unity is because of the easy mental transition to UnityScript (yes, it’s not quite JS, but close enough). Within a week I had a pretty cool game going. I think Unity is a great tool. Is C# a deal breaker? Well, I would not have chosen Unity in the first place if it was C# only. I might stick with it now that I am far enough up the learning curve, but it certainly puts a sour taste in my mouth now, and you’ve definitely lost a differentiating factor among game engines. Your current base seems happy enough, but I think you may be underestimating the growing popularity of JS (or TypeScript). Here’s just one data point (GitHub commits), but JS is far and away the most popular language there. Just because people use JS in the Web Dev world doesnt mean it should be used here. Not to mention its not real JS anyway as you mention. You also mention the growing use of TypeScript which in my limited knowledge, allows developers to create type safe code. Which surely would be a point in favour of C#? Perfect reply. I started using Unity because I knew javascript from being a webdev, it gave me the initial confidence and familiarity. If I had to begin now, I probably would have gone with UE4 since it uses a framework system (with superior default graphics) instead of raw C/C#/C++. Somehow people have been fooled into cheering when fewer options are available. Remember building out to web browser? 
The unity player used to work perfectly, now we’re forced to use WebGL which performs at like 2-3fps garbage most of the time, because Flash is the new face of evil! Hey, we could’ve just not run it on mobile, lord knows WebGL won’t either, but nope some smooth talk and everyone has to suffer, it’s thrown in the garbage along with unityscript. Speaking of programming languages, what do you guys think about the D programming language? Not for scripting, but for the core engine? The people complaining about the loss of unityscript clearly don’t understand what is to gain by going to .NET 4.5. Parallel processing and task based threading are going to be massive performance gains. Not every game designer is focused on performance though. UnityScript made it easy for web developers to jump into Unity, especially coming from Flash. We are gaining performance but we should acknowledge that it’s at the expense of user friendliness and ease of use for some of us. I feel like Unity is becoming all about programming and engineering vs fast. messy artistic iteration, and some kind of arms race for great graphics with Unreal, when it’s openness and flexibility was what attracted me to it in the first place. I’m not surprised by the decision but this is a MASSIVE problem for me. I want to convert overy my program now to C# but I have Unityscripts that access C# scripts in the Stand Assets folder. If I convert any of my Unityscripts to C# in the scripts folder then they can no longer access the other unityscripts in the scripts folder as they C# scripts and get compiled first. It kind of means I have to convert my entire project over to C# in one go which is of over 50 scripts and tens of thousands of lines of code. It’s going to cost me a lot of money and time to do this. One thing that would help me is if there was a special folder that was compiled last so I could move over some of the scripts over one at a time. For anyone who has scripts that communicate with each other this is going to be a difficult task if Unity don’t suply ways in which to help with this. Hi Philip > It kind of means I have to convert my entire project over to C# in one go which is of over 50 scripts and tens of thousands of lines of code. It’s going to cost me a lot of money and time to do this. Why don’t you give it a try to the conversion tool? Theoretically it should be able to convert your code requiring minimal changes. Best Adriano Thanks for the reply :) Sorry I didn’t responded sooner. Are you referring to the tool? I’m very much going to give that a go. Fingers crossed that it will help immensely. It’s just a bit of a frightening prospect as I already know my Unityscript isn’t that well written or optimized and shifting it over to C sharp when I can’t remember half of the code and why I’ve done things in certain ways. For example in my early scripts and as a result my largest scripts I’ve removed #pragma strict to make things easier. I’m now aware that this was not a good idea and I need to re introduce it (which throws up over 100 errors) before trying to convert. Another thing I’m aware of is that I’ve useded a declare/find proccess for a lot of object. private var MouseWarning : GameObject; MouseWarning = GameObject.Find(“MouseWarning”); For example which in C Sharp will need reorganizing as the ‘find’ has to accur within a void. So there are quite a few things I can do to improve things before the conversion but I shudder to think of the ones I don’t know about yet. 
In the long run it’s a good thing but in the short term it does worry me quite a bit. I’m hoping the conversion tools will be a great asset to me for this. Time will tell once I make that dive :) I’ve just realized you’re referring to the conversion tool being developed by Unity. I’m going to a go with that and see where I get :) All the best Phil As a developer, I usually don’t mind any ground breaking change in *new version* Unity. It does not affect the version of Unity I am using, I always have a choice to keep on existing version. I want projects build using older version of Unity can be published to the Apple / Android market. Usually submission requirement change are quite small, such as icon size, splash screen size, xcode project version or setting, etc. If there are update for these changes, I have no need to take the risk of upgrading my projects. This is kind of my approach too. I’m on Unity 5.2 and the only extra steps i had to do were add a script to automatically populate Apple’s camera permissions and i manually drag in the iPad Pro 167px icon each time I build in Xcode. But somethings don’t work with earlier versions. For example I have held off on my AppleTv version because only 5.3 and up support TvOS. But i think for the time being if converting large, legacy projects to C# is too resource intensive we can thankfully at least stick with earlier versions for as long as is possible. it’s funny because a lot of these problems apply only to IOS and Android developers. If i was making PC and Mac games I think it would be a lot easier to upgrade to newer versions of Unity. @Richard Fine – you wrote “After a while, any package that hasn’t been ported will be removed from the store.” Don’t you mean will be transitioned to “deprecated status” so no new sales are possible but customers who “paid” for those assets will continue to have access to them? Why not support Typescript? Well, I’m one of that little .8% totally Unityscript projects. I am happy to switch as I’ve been meaning to for a while and this will finally push me to do that. I think C# is fine the only thing that really bothers me is having to use temp variables. They add several lines of code and are much harder to read. Is there any chance that C# will handle assigning variables like UnityScript? I assume you mean being able to do something like vector3.x = 1; In C# Vectors are structs without public accessors to each number, only a constructor that takes all 3 variables making this impossible. The only way around this would to make it Vector structs a class instead but I remember a Unity3D guy talking about this saying if it was a class projects would crash without the memory overhead! Yes, not being able to write transform.position.x = 1; is really annoying in C#. The lovely thing about Unityscript is that it allowed you to do that and handled it behind the scenes. I’m a designer turned programmer so this is a really nice thing to have when writing clear code. I’m shocked C# can’t do that – I had no idea. I’ve mentioned this before but this really reminds me of Flash moving from ActionScript 2 to ActionScript 3. All of a sudden we had to write three lines of code to open a URL and the devs kept telling us how much better it was:) It was about that time I left Flash for Unity ironically. When they came for the boo and removed it, I was silent – I did not write on boo. When they came for javascript and removed it, I was silent – I did not write on java. When they came for C # I left with unity. 
When they came for C# I started typing with my knees Lol:) Funny – but you know you may be showing great foresight in the long run. Ive used JavaScript for years. UnityScript was there from unity 1, and syntactically similar enough to JavaScript that its made the transition a breeze for me, and I never thought it would disappear. that’s why i relied on it. I’m 45 so C# isn’t old enough to be seen as an old-time language for my generation. But as I’ve been told many times now on the forums, I should know that technologies move on and accept it gracefully. Your statement is just a reminder that in 2030 maybe the kids will all be using some kind of eye-activated visual language, and C# will go the way of the dodo. It’s never a big deal until it happens to you;) Great post and thanks for the transparency about your decisions. I started with UnityScript as it was easier for me as a non programmer years ago. Now I’m a computer science student who relies on C# and I love it. It would be great to include some official Migration tutorials. I remember myself being totally confused about why can’t I easily use GetComponent(SomeComponent)in C#. This is so exciting! (You can tell I’m a C# developer, haha) I’m sure this will free up a lot of resources for the Unity developers in the long run to focus on important stuff! It must stink for people who rely heavily on JavaScript/UnityScript, but at least Unity is working on those conversion tools to convert JavaScript to C# Does this mean we get proper multithreading and concurrency? This is a very exciting news! It shoud have been done years ago but, better now than never! Not only the Dev team will have more time to work on new tools and bug fixing, but the whole community will have access to more powerful features in the future. I know it’s hard to accept something new, something that’s changing your daily basis, but thisis IT and this update it’s a step foward and I hope the majority of the community will understand this with time. Like many I feel like this should have been done years ago. It seems that Unity have held back updating C# and .Net support in unity so that C# does not gain an advantage over Unity Script. What really surprises me is the people saying they “hate” C# and that they might leave Unity due to this. Not only is C# more powerful once we get C# 6 and a higher .Net support, the syntax for Unity Script and C# are probably the closest you can get. I could understand C language programmers not liking a language such as Python due to a fair difference in syntax and functionality, but to like Unity Script and use the word “hate” against C#, this just confuses me. I have personally never any time even looking at Unity Script, but a colleague of mine needed help with something once in Unity Script and I was able to understand the code and fix bugs in it having pretty much never looked at it before, that is how similar Unity’s C# and Unity Script is. When I started using unity 8 years ago, it was for .js support which was close to .java, a language I learned and thought I loved. Fact is after a year of trying to make a game, .js had to go. The lack of strong typing at the time, and the implicit variable creation were killing me. I am glad it is finally FINALLY being phased out, this language should have been removed years ago and its team moved to some more exciting projects inside Unity. Java is far more closer to C# than to JavaScript or UnityScript. 
UnityScript user here (making projects in UnityScript since Unity 3), I have to say I totally understand and support the decision, however as others have noted, the transition needs to be as smooth as possible. This is going to be difficult for all of us who are used to JS. All of my old projects and my current project use like 90% JS. That said, I’m lucky to be fluent in C# as well, but there are others who are not as lucky. The arguments of efficiency vs accessibility have already been established, but there’s another element which I wanted to bring up: time. I find that I can write code faster in UnityScript than in C#, there is less to write and it’s more straightforward, allowing for less time pondering at the keyboard and more time writing code. This is a very small inconvenience (so don’t let it hinder your decision) but it’s something that we have to consider. My biggest problem with this switch is that I have A LOT of old code from other projects that are extremely useful in every new project I start. So where I used to be able to just drag and drop/copy old scripts, now I would have to convert them to C#. Once again, not a huge issue seeing as there are Converter tools out there, but it’s another inconvenience that adds up. As stated already, if this is what Unity needs to do to keep up to date, then I’m all for it. However, I do think that the 3.6% figure is too high, considering how many people use Unity. This is similar to the x64 bit Editor issues. Forcing the number down in a set time might not be a wise decision, rather we should encourage conversion and increase awareness until the number declines on its own. Not worried, because the Unity Team seems to understand this. Sadly, it would eat up too much development time to convert my current project, but from now on with new projects I will use C# to help drop the percentage. Lastly, by dropping UnityScript Documentation examples, you mean from the current/new Docs? I would hope the old Docs for 5.6/beyond would still include the JS examples. Having gone through my own .js to .cs transition a few years back I read this article with great interest. For me, it was really important to keep my project both ‘alive’ and in a format I could continue to build upon. As such I’m afraid a couple of the Unity converter’s stated limitations would have made it a non starter for me: – Formatting is not preserved …Without these the output would barely have felt like _my_ script any more and certainly wouldn’t have been code I could have continued working with without spending a ton of time manually re-formatting. So in my case I took the opportunity to write the asset CSharpatron. It was a lot of work, but I successfully used it to convert about 150k lines of Unityscipt with minimal hand-editing. Since then it has been used by quite a few people with (mostly) a very high degree of success. Various improvements were rolled in over subsequent releases. I first read Richard’s comment: ‘we weren’t happy with the approaches they use’ (re: existing converters) and thought ‘fair enough, given their deep insights, experience creating IL2CPP and other conversion tools, they’re probably right’. 
BUT now that I look at the stated list of limitations for the Unity converter, especially the comments and formatting aspect, I feel justified in at least putting CSharpatron forward as an alternative… CSharpatron _does_ preserve comments and formatting, and it has a damn good go at converting almost everything you might be doing in Unityscript into clean C# code. If you have .js to convert, perhaps you may find it useful :) Thank you for sharing this. I spent last night looking for JS to C# converters on the asset store and I didn’t see this one. It actually answers a lot of the questions i had for the Unity converter that i asked here: and which to their credit Unity promptly and honestly answered. I will definitely back things up and give CSharpatron a go. I have been considering switching bit by bit to C# and i think its just the relative suddenness of this announcement and the fact that its an obseletion of UnityScript and not a deprecation that has given all of this such a sense of urgency. I’m a c# developer, but i think unityscript make web developers ( which is a HUUUGE community ) choose unity when they are choosing an engine ( even though they will probably use c# after learning the engine ) I agree. From what I understand, JavaScript is the most popular language due to it’s prevalence on the web, and that’s why I chose Unity over Unreal. it just felt more accessible. It opened Unity to a huge body of developers and designers. So it’s about time to switch to GODOT Game Engine , they have an easy Script Language and in V3.0 also a Visual Script Editor and ohh without that fu%&ing SplashScreen compulsion Sure go ahead and learn a new engine, and scripting language, just to avoid a quickly growing industry standard language like C#, which now gives you access to the now free Xamarin, .net core 2.0, and all the other cool new stuff Microsoft and other big companies are doing, after .net gone open source. UnityScript may not be JavaScript, but the syntax is close enough that it allowed web designers like me to migrate to Unity en masse. This reminds me of when Flash switched from the messier but more accessible ActionScript 2 to the more performant but verbose ActionScript 3.0. That was a big reason many artists like me dropped it. I completely understand why Unity is doing this, and I know that C# is probably a superior language in many ways. But sometimes you lose accessibility in the race for efficiency. Nimian Legends : BrightRidge is almost 100% coded in UnityScript, and while a lot of my project would seem to be in C#, that’s because I use so many Asset Store C# assets, not because I’m coding homemade scripts with it. I’m going to give the converter a shot, but I want to say this is a serious issue for some people who have legacy projects we have been developing for years. Please don’t dismiss the concerns of UnityScript coders. It is NOT a trivial thing for the average non-coder to learn a new language like C#. For me this is a real blow to my production, and frighteningly makes we wonder how I will be able to continue with Unity in the future. We absolutely understand, and we’re not dismissing the concerns. We still want Unity to be a welcoming place for artists, designers, and other disciplines that aren’t so much with the coding. C# is not the way to do that – but to be honest, neither is UnityScript. The syntax is a bit easier than C# for new users, but not a lot easier. 
We think that making Unity accessible to artists requires a solution that’s an order of magnitude better, so that’s what we are working on. The timing sucks – ideally we would not be breaking this news until those solutions are ready. I wish I could have written this post as “Artists, we’re replacing UnityScript with an even easier approach for you!” Alas, we’re not there yet, and we didn’t want to wait to break the news about UnityScript so that people understand our plans early. This said, we’re not committed to a particular date for removing it, so maybe by the time we actually do, those other solutions will be further along. As you say, sometimes you lose accessibility in the race for efficiency, but I hope I can reassure you that it will only be a temporary loss of accessibility in this case – we’re absolutely not giving up on providing artists with a way to express behaviour within Unity. I’m all for it and honestly surprised UnityScript was deprecated years ago. Now Unity can focus solely on C#. Personally I hate C#; one reason I embraced Unity was Unityscript, all my projects rely on that. This will probably make me depart from Unity. Sad thing but inevitable…. Transition to subscription model (like Adobe), then killing JS to only support C#…. Time to find another tool, like I’ve already found replacing Adobe’s Phoshop and Illustrator with Affinity Photo and Affinity Designer. The time has come, it’s a well-reasoned decision, and there are plenty of resources to teach people how to code in C#. In fact, it’s a far superior skillset since it will translate to other programming tasks (which was not the case for UnityScript). I know it’s going to anger some folks, but it’s time. Finally. While we’re moving with the times, is there any chance you could stop referring to C# code as ‘scripts’. If I want to write a script, I’ll write in LUA or something. I certainly wouldn’t categorize the structures, frameworks and controls systems I do build for Unity that way. As a C# developer for many years (.NET beta 1 actually) it’s like hearing nails on a chalk board whenever you talk about my code as ‘scripts’. Yes, I know it’s petty….I’ll go back in my box now….. I’m also very dissatisfied with the term script, because the art of programming has been devalued to scripting. The perception of programming among many disciplines in the games industry shifted to “writing scripts in Unity is no real programming, why do we need programmers at all”. That’s not why I chose a career as a software developer. I wish the perceived value of programming is being recovered and I believe Unity can make a start by not using the terms script and scripting anymore. C# is a high level programming language. :D Alas, script is the proper term for how this code is used in Unity… That is, the intermediate output is interpreted by the game engine, as opposed to being executed natively on the CPU. C# is a high-level multipurpose language, and in this case, it is being used as a scripting language. C# is, in fact, designed to do many of the things that scripting languages are famous for leveraging (for example, not having to directly manage memory usage, safer sandboxing, etc). It doesn’t matter if it’s called scripting or programming… what matters is if you’re sitting down to do coding, or software engineering. No, C# is not a scripting language, and writing subclasses of MonoBehavior is not scripting. Scripts are generally interpreted, whereas C# is JIT compiled and actually executes natively. 
So I agree that the term 'script' is somewhat of a misnomer, and probably just stuck in the Unity vocabulary from earlier days. Perhaps with time it will go away, but I honestly can't work up any great passion about it. I love working with Unity, and what they call my .cs files doesn't bother me.

Just think of "script" as a synonym for MonoBehaviour or, in a general sense, something "updatable" — the entity whose life cycle is controlled by the engine. Of course, the majority of code should be outside such "scripts" — if you're a real software developer rather than a regular "game maker".

Time to keep working on that amazing visual scripting hack week prototype from a few years ago that actually wrote C#. It seemed like the perfect blend of artist friendliness, nodey free-form code and snapping vertical flow of text. Could be much better than Blueprints, as that throws out all the benefits of text editors.

I think the Nottorus plugin can help you.

It's really great news! I hope that after this we can make Unity even more awesome and improve support for all the latest features of C#. Sometimes it's the small things that matter: OOP structure brings a lot of redundant code, while small JS scripts let users write quick sequences of actions. But I think using JS scripts for that is the wrong way; the more correct way is visual programming, and as I understand it, Unity is moving towards that. You made this decision based on interesting statistics – it's cool! I translated this post into Russian, in case anyone is interested.

Idea: Can you make a simple draft converter? Just for having a fast first draft in C# that we can improve.

The fact that the numbers were crunched and it really does make sense to move to a more unified (no pun intended) scripting environment shows that things are progressing for the better. I apologize for a bit of bias as a C# developer, but I am really looking forward to having new language features and syntactic sugar from the more current versions of C#. I imagine that eventually things like coroutines will go away in favor of the "async… await" pattern, which really does make the code a lot cleaner and more readable when dealing with asynchronous tasks. I think this is a wise move – I do feel for those who have to take the plunge, and hopefully the proffered conversion tools will lessen the pains of having to transition scripts from one language to another. In the end, the tools will be leaner and we'll all be on the same page when it comes to scripting in Unity.

Lovely, my wish came true :)

Nice, then welcome Cocos Creator, with pure Javascript. ;)

Would it make sense to make the "drop Unityscript" version a full point release, where devs who _must_ stick with it can install the C#-only version concurrently so both project types can be supported? All future updates will only occur on the C# version, but devs can still keep developing on the Unityscript version as long as their product requires.

So, when we eventually _do_ drop UnityScript, it will be in some major version, not some obscure patch release. Can't guarantee anything about the major release before that one in terms of how long we'd support it for, but what you're suggesting is a good idea.
Yes please remove those properties as my own catching would like to use those names :) Just use the keyword “new”, which will override the same name of the deprecated variable/property name. private new Rigidbody rigidbody; Yes, using ‘new’ can mitigate this. This is something I am already doing but it has fringe drawbacks. My point being, Unity has a lot of deprecated parts of their API that never get removed. Provided they’ve just ended the 5.x life cycle, that would have been a prime time to remove deprecated Fields/Properties/Methods. It was the 4.x->5.x transition that deprecated these specific Properties anyway, so ample time has certainly passed. I think it’s a reasonable and probably good decision to drop UnityScript, but I don’t believe you can just conclude by the percentages that in a mixed .js/.cs project that the .js is less important or not even used. If you said here’s a mixed project, without looking at it I would guess that the developer is coding in Unityscript and the CS is all third-party code from the Asset Store or wherever. This was certainly the case in my projects for a long time until switched to C# and it is actually still the case in many of my projects, where typically I haved a bunch of C# plugins and the game scripts are still UnityScript. They are able to tell what is your scripts by eliminating the C# scripts commonly used by asset store packages, and apologised to your 0.8% demographic… This. Remove even “.transform”, “.material”, etc, etc….. It has to be ultra-clear and explicit when something does a GetComponent, when something returns a copy of an array, etc… It will be well worth it dangit that was supposed to be a reply to another comment What I find to be super aggravating about it is that the Auto Script Updater already catches these things when moving between the versions. This should have been a “will be straight-up removed next version” thing. The problem is, the Script Updater actually uses the presence of those properties – and the compile failure generated when you try to use them – to trigger the upgrading. So removing those properties isn’t as simple as saying “the Script Updater will catch it,” unfortunately. There’s definitely a point where we should remove them, it’s just not as easy as you might first think. I feel like 2017.1 was a really good opportunity to do so, considering these have been deprecated for the *entirety* of the 5.x life cycle. That’s a fair point, but the numbers don’t look any better if we consider the absolute number of .js files in the project. Of all the projects we received data on between April and July this year, only 2% had more than 20 .js files in, and only 0.2% had more than 100. (By contrast, 37.7% of all projects in the same time window had 20+ .cs files). So this suggests that of the 3.6% of projects who are using UnityScript for 20%+ of their code assets, for a lot of those projects it’s only a handful of code anyway – and so, we hope, not too difficult a prospect to switch to C#. For the 0.2% of projects with larger JS codebases, it sucks more, but this is where things like grabbing the UnityScript compiler from Github and just running it outside of Unity could help. Great… Now I have to learn proper C# and start converting almost 100 scripts. As if I don’t have enough on my plate. Thanks a lot… You don’t have to do anything, don’t update your unity. You should have been doing that years ago. Time to roll with the punches and get with the times. 
Don't convert those 100 scripts by hand – try our auto-converter! Don't worry, it's easier than you think :)

Great decision!

Great move. Unify the Unity community on a single language. It would be a harder decision, I think, if UnityScript were actually JavaScript and not just a look-alike custom language. But it really is a stretch to call it JavaScript. Kill it and put those resources into giving us all more C# goodness.

In the process of choosing the right game engine to use, we found Unity3d advertising the engine as an option friendly to non-coders. Unityscript was pointed to as a powerful, easy, painless and almost futuristic solution. We are a couple of developers doing a humongous game, almost too big for 2 people. We are working hard on this project; I am an art director by training and had lots of problems with coders over the last 15 years, so I found in Unity3d the chance to make our games. We spent lots of money (money we don't have) on the engine itself, upgrades, etc.; we abandoned our careers to make this game happen, and now we face the fear of this deprecation. Unity3d was advertised as a simple-coding, artist-friendly engine. We had 7 years of work here being simply "deprecated" and we have no good words for this at this point. All we have now is to rush our project and that's it. Remaking the code is virtually impossible. Dinart Filho

"Unityscript was pointed to as a powerful, easy, painless and almost futuristic solution" — coding in UnityScript and C# is almost exactly the same thing. UnityScript being easier than C# is a baseless myth. Look into this tool. Maybe the conversion will be relatively painless.

Hi Dinart > All we have now is to rush our project and that's it. We are not dropping support in 2017.2 for sure (there's always the possibility that the final deprecation will happen in an upcoming version, not 2017.3). So, one option is to stick to version 2017.2 for this project and be free to move to newer versions for any new project. You can continue to use UnityScript (or any other language that targets .Net); the point is that it will not be officially supported and you'll need to compile your scripts manually (I understand this is not optimal and requires effort, but at least it is an option if you need to upgrade Unity and are not able to convert your code to C#). > Remaking the code is virtually impossible. The idea is to have your code converted automatically, requiring as little work from devs as possible (of course you'll still need to know C#, but given the time you said you have been working on the game, I don't think you'll have big troubles using C#). If you are willing to, give the conversion tool () a try and feel free to DM me if you hit any issues / questions. Best Adriano

As a Web developer who uses JS all the time of necessity, building my Unity games in 100% UnityScript has always made sense. I have shipping products and VERY complex long-term in-progress jobs that will all be affected, all 100% JS! If finishing or maintaining them in the future becomes impractical, that will be a massive blow to a Unity-based developer going back to the Unity 1 days. This isn't to say Unity shouldn't make the transition—I trust the reasons. But please make it as gradual and painless as possible! Lots of people will vote to barge ahead quickly, so you need the votes on the other side! Here's mine :) Keep JS support for as long as humanly possible. And take the time to make that converter elegant. C# aside… any hope of Swift in future? I was really hoping my next language would be Swift!
We certainly want to remove as much of the pain from the transition as is practical. That includes improving the converter – please do drop by the thread to get the beta and tell us if there’s code it is not handling well, and we’ll improve it. Regarding Swift support – we have no plans to add any new built-in languages to Unity right now. (Someone actually did experiment with Swift support at a recent Hackweek, but it’s not the direction we want to go at the moment). Good decision, I have just been waiting for a good conversion tool to be able to abandon js altogether, as right now it’s not easy to convert a large old project with mixed code. The UI/inspector integration of many js files makes it particularly difficult Sounds like you have some great test cases for the converter – please do try it out and let us know what cases need improving. As a developer who still primarily uses UnityScript, I’ll of course be a little sad to see it go. However, It’s definitely the right decision and I don’t feel it’ll take too long for me to switch over to C# (I’ve worked with it for long enough to get a handle on its syntax, and the conversion tool should save me some time porting my projects.) It was a good decision to include UnityScript back at Unity 1.0, it’s an equally good decision to deprecate it now. There is no need for NativeArray. The latest .NET and Mono 5.x have Span which does just that and has special support in the runtime. I agree. Let’s hope many more features of .NET/C# get embraced and used in the engine. We know it might sound similar, but NativeArray actually does a lot more than Span under the hood. You’ll see when the JobSystem is all released! Well I am glad to see that Unity have finally decided to get rid of UnityScript. This is the right decision. As a front-end web dev trying to step into the game dev world, I chose Unity because I was comfortable with the JS-like syntax of UnityScript. I’m sorry to see it go because it will raise the bar for those making the same step, but I understand the maintenance costs for a second language must be high. Even though I have a few active projects in Unityscript it still feels like it’s a good move. I’d love to see how good a job the compiler does… Bet I can break it :P Oops! Meant converter… Hi Pete. I am sure there’ll be lots of corner cases ;) I am glad you asked; you can find info on how to download it in this thread Best Adriano Thanks Adriano! well reasoned, if you are going to spend the time on more useful features and bug fixes, then this is definitely justifiable. Just don’t forget a lot of code has to load JSON from the internet, and that data won’t show up in the projects file system! The JSON support in the engine is totally separate from all the UnityScript stuff – it’s not affected by any of this. It´s a great thing to have JScript removed from Unity, it´s unsafe and filled with bugs. Maybe you should replace it with Lua language or Python… It would help a lot. FINALLY. So glad. Should’ve been done 2-3 years ago. I was gonna type the exact same thing!! :) Thanks for looking out guys. I know its not the simplest thing to do this kind of transition but it will be worth it for everyone. As far as I know Unity was one of the few engines supporting multiple languages to begin with. J. McKenzie: “Considering how there are more developers using JavaScript than C#, you might want to reconsider this decision.” Not many, because Unity Script != web Javascript. I for one welcome these changes. 
I know there’s very little difference in developing in either Unity’s Javascript variant or in C# – most of the time we are doing logic and talking to the Unity API :) UnityScript was never JavaScript (as in Node.js / Browser) which led to a lot of confusion among beginning developers and people claiming UnityScript should be saved “because JavaScript is so big elsewhere” which is totally bogus. Yes, please led it die, it will streamline documentation among other things. Awesome! Keep deprecating old stuff that holds the engine back! People will adapt no matter what Good decision *thumbs up* Sounds like an F# opportunity. Why that” Sounds like the exact opposite to me. Let’s not add yet another superficial scripting option that costs precious dev time. Let’s just focus on one single way of doing things Don’t do this. For the love of anything You can already do that in practice, I believe. There’s no official Unity support for it, but as long as you compile it to compatible version, it should be fine. C# and F# are both .NET languages, so it shouldn’t be a problem. I suppose that would also make F# one of the easiest languages to add support for, but whatever. You ought to be able to do it already anyway if you really want to. Almost true. I think it can fail under some circumstances, and there’re some compiler optimizations which might not be there, but I can’t talk about the details. It somehow works. The thing is that there’s a lot of different features, so I think it could fit the bill in the long run. This is a great decision. I work as the community manager of a small software company and I know how valuable developer time is on the RIGHT products. Well done. I’m happy with the decission. In fact, I wrote almost every UnityScript into C# on my own in the past, just to not have mixed scripts in our projects. Focus on the new features and drop old stuff, which is hardly used. I have mixed feelings about this. On one side, i totally understand the technical reasonings and support those. As one of the main ones, of course it is way more streamlined, easier and faster to maintain one language instead of 2-3+. Me personally, i also don’t use unityscript for many many years anymore unless a client explicitly asks for it (and then that makes me feel a bit dirty to use it =) ). There are many very obvious and valid reasons why going with C# as main “pro coder” language makes more sense in Unity right now. But on the other side, we all started coding/creating somewhere, and i know for many that has been a scripting language like javascript, actionscript, some other ecmascript derivate or similar in general. And so while it is not the case for me anymore for many years, especially since i had an oop programming education, i still remember when i started out all those years ago, it was enough work and daunting to learn to do the basic coding things already and back then learning to get going in a basic scripting language without strict data typing etc made it way easier to get into and get basic things going and have that success feeling. It’s also many things one does not even consider as complicated anymore when one is used to it like for example in C# and such languages one can’t populate arrays and other list types etc as freely nilly willy with any type etc, there’s not one list, one object type one can just populate with anything with no restrictions as easily, one should learn the pros and cons of different list types, their methods etc. 
Now while regarding resource allocation, debugging and other reasons that is a good thing, it makes things way harder to get into for beginners. One could bring up many such examples, like when i first used Unityscript in Unity and then transitioned to C# it at first already threw me off one couldn’t do some things “as inline” in C# as in Unityscript, like for example yield instructions. While OOP, strict data typing, nice encapsulated clean classes, apis etc all make a lot of sense and so of course have their place and it makes sense most more seasoned devs transition to such things at least partially over time, there is also something to be said about what is easier and quicker to get into for beginners or maybe more design/art oriented people than (clean nice) code oriented people. To me Unityscript in Unity was never a great replacement or in tandem usable solution next to/with C# since always too many things did not feel fully supported well in Unityscript and just having to put scripts in different languages in different folders as one has to consider compilation order etc when using several languages together always felt a bit hacky and wonky to me. But i’m not sure dropping it is the better solution compared to making it full futured for those maybe less “seasoned programmer” types so that Unity has a first class easy for beginners/non coders to get into option, too. It also sets a bit the focus on what type of engine one wants it to become more. Does one then just say things like Playmaker etc are what non coders/more design/artside oriented people should use and not code much else themselves or make it all visual workflow driven even more for them or still also offer them an easier in into the “full on ” programming side of things. There are also other aspects to consider like i feel the web does not accelerate as much anymore as back when there were plugins which indirectly pushed unofficial standards evolving faster than when it’s all on the browser makers to adopt and push further official standards faster and nicely, but hey, at the end of the day, maybe they’ll get there and maybe then what some envision as happy thing could still happen at some point, that javascript like web scripting languages also become used more and more for desktop/high end level games/apps and then maybe not the worst thing when one’s engine has a similar language to what such people are already used to using. No. Very good decision. Designing good software is skipping features as much as it is adding features. Great news! Considering how there are more developers using JavaScript than C#, you might want to reconsider this decision. No. While JavaScript is great (I personally use it daily), it doesn’t belong in every facet of development. C# is much more well suited towards game development and as such should recieve all the support and development it can in Unity. What are you talking about? The FIRST sentence says that only 3.6% of the projects currently are mainly relying on UnityScript. And in terms of JavaScript: Unity Script never was JavaScript in the first place. It simply used the syntax. What numbers are you basing your statement on? Did you read the article? Only 3.6% of users have more than 20% of their script files in UnityScript, and less than 0.8% exclusively UnityScript. C# is absolutely the de facto standard in Unity now. That is absolutely not true. 
Sometimes you get the impression that a lot of people use UnityScript because there’s some kind of urban legend going on in the Unity community saying that “javascript is easier”, which is completely ridiculous. So you often see first-time programmers giving code examples in unityscript in the forums. But in reality, the extremely large majority of actual game developpers are using c# exclusively UnityScript is NOT JavaScript. Syntactically its not even that close. Dude. Stop calling it Javascript. It’s not JavaScript.
https://blogs.unity3d.com/pt/2017/08/11/unityscripts-long-ride-off-into-the-sunset/
Here is the code that is giving me problems. I'm trying to write code that will find the distance between two points, but I get the "cannot find symbol" error on lines 27-29 (when I start doing math).

Code:

import java.lang.*;
import java.util.Scanner;

public class Program
{
    public static void main (String[] args)
    {
        double x1, y1, x2, y2;   // coordinates of two points
        double distance;         // distance between the points

        Scanner scan = new Scanner(System.in);

        // Read in the two points
        System.out.print ("Enter the coordinates of the first point " +
                          "(put a space between them): ");
        x1 = scan.nextDouble();
        y1 = scan.nextDouble();

        System.out.print ("Enter the coordinates of the second point: ");
        x2 = scan.nextDouble();
        y2 = scan.nextDouble();

        // Compute the distance
        double a, b, c, d, e, f;
        a = x2 + x1;
        b = y2 + y1;
        e = Math.java.pow(a,2);
        f = Math.java.pow(b,2);
        distance = Math.java.sqrt((e + f));

        // Print out the answer
        System.out.println(distance);
    }
}
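A likely fix, sketched below (my own suggestion, not part of the original thread): java.lang.Math is a class, so its static methods are called as Math.pow(...) and Math.sqrt(...); the Math.java.pow spelling is what produces the "cannot find symbol" error. While changing that, note that a distance calculation needs the differences of the coordinates, not their sums:

// Corrected math section (sketch): call the static methods on Math directly
a = x2 - x1;                  // difference, not sum, for a distance
b = y2 - y1;
e = Math.pow(a, 2);           // was Math.java.pow(a,2) -> "cannot find symbol"
f = Math.pow(b, 2);
distance = Math.sqrt(e + f);  // was Math.java.sqrt((e + f))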
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/17773-java-cant-find-symbols-printingthethread.html
In this article by Sohail Salehi, author of the book Mastering Symfony, we are going to discuss performance improvement using cache. Caching is a vast subject and needs its own book to be covered properly. However, in our Symfony project, we are interested in two types of caches only:

- Application cache
- Database cache

We will see what caching facilities are provided in Symfony by default and how we can use them. We are going to apply the caching techniques on some methods in our projects and watch the performance improvement. By the end of this article, you will have a firm understanding of the usage of HTTP cache headers in the application layer and caching libraries.

Definition of cache

A cache is a temporary place that stores contents that can be served faster when they are needed. Considering that we already have a permanent place on disk to store our web contents (templates, code, and database tables), a cache sounds like duplicate storage. That is exactly what it is. Caches are duplicates, and we need them because, in return for consuming extra space to store the same data, they provide a very fast response to some requests. So this is a very good trade-off between storage and performance.

To give you an example of how good this deal can be, consider the following image. On the left side, we have a usual client/server request/response model, and let's say the response latency is two seconds and there are only 100 users who hit the same content per hour:

On the right side, however, we have a cache layer that sits between the client and server. What it does basically is receive the same request and pass it to the server. The server sends a response to the cache and, because this response is new to the cache, it will save a copy (duplicate) of the response and then pass it back to the client. The latency is 2 + 0.2 seconds.

However, it doesn't add up, does it? The purpose of using a cache was to improve the overall performance and reduce the latency. It has already added more delays to the cycle. With this result, how could it possibly be beneficial? The answer is in the following image:

Now, with the response being cached, imagine the same request comes through. (We have about 100 requests/hour for the same content, remember?) This time, the cache layer looks into its space, finds the response, and sends it back to the client, without bothering the server. The latency is 0.2 seconds.

Of course, these are only imaginary numbers and situations. However, in the simplest form, this is how a cache works. It might not be very helpful on a low-traffic website; however, when we are dealing with thousands of concurrent users on a high-traffic website, then we can appreciate the value of caching.

So, according to the previous images, we can define some terminology and use it in this article as we continue. In the first image, when a client asked for that page, it didn't exist in the cache yet, and the cache layer had to store a copy of its contents for future references. This is called a Cache Miss. However, in the second image, we already had a copy of the contents stored in the cache and we benefited from it. This is called a Cache Hit.

Characteristics of a good cache

If you do a quick search, you will find that a good cache is defined as the one which misses only once. In other words, this cache miss happens only if the content has not been requested before. This feature is necessary but it is not sufficient.
To clarify the situation a little bit, let's add two more terms here. A cache can be in one of the following states: fresh (has the same contents as the original response) and stale (has the old response's contents, which have now changed on the server).

The important question here is: for how long should a cache be kept? We have the power to define the freshness of a cache by setting an expiration period. We will see how to do this in the coming sections. However, just because we have this power doesn't mean that we are right about the content's freshness. Consider the situation shown in the following image:

If we cache content for a long time, a cache miss won't happen again (which satisfies the preceding definition), but the content might lose its freshness according to the dynamic resources that might change on the server. To give you an example, nobody likes to read the news of three months ago when they open the BBC website.

Now, we can modify the definition of a good cache as follows: A cache strategy is considered to be good if a cache miss for the same content happens only once, while the cached contents are still fresh.

This means that defining the cache expiry time won't be enough and we need another strategy to keep an eye on cache freshness. This happens via a cache validation strategy. When the server sends a response, we can set the validation rules on the basis of what really matters on the server side, and this way, we can keep the contents stored in the cache fresh, as shown in the following image. We will see how to do this in Symfony soon.

Caches in a Symfony project

In this article, we will focus on two types of caches: the gateway cache (which is called reverse proxy cache as well) and the Doctrine cache. As you might have guessed, the gateway cache deals with all of the HTTP cache headers. Symfony comes with a very strong gateway cache out of the box. All you need to do is just activate it in your front controller and then start defining your cache expiration and validation strategies inside your controllers.

That said, it does not mean that you are forced or restrained to use the Symfony cache only. If you prefer other reverse proxy cache libraries (for example, Varnish or Squid), you are welcome to use them. The caching configurations in Symfony are transparent such that you don't need to change a single line inside your controllers when you change your caching libraries. Just modify your config.yml file and you will be good to go.

However, we all know that caching is not for application layers and views only. Sometimes, we need to cache any database-related contents as well. For our Doctrine ORM, this includes the metadata cache, query cache, and result cache. Doctrine comes with its own bundle to handle these types of caches and it uses a wide range of libraries (APC, Memcached, Redis, and so on) to do the job. Again, we don't need to install anything to use this cache bundle. If we have Doctrine installed already, all we need to do is configure something and then all the Doctrine caching power will be at our disposal (a minimal configuration sketch follows at the end of this section).

Putting these two caching types together, we will have a big picture of how to cache our Symfony project:

As you can see in this image, we might have a problem with the final cached page. Imagine that we have a static page that might change once a week, and in this page, there are some blocks that might change on a daily or even hourly basis, as shown in the following image. The User dashboard in our project is a good example.
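Before returning to the dashboard example, here is the minimal Doctrine configuration sketch promised above. This snippet is my own illustration, not from the article; the driver choices, host, and port are assumptions, so adjust them to whatever cache backend is actually installed in your project:

# app/config/config.yml (illustrative sketch)
doctrine:
    orm:
        # cache the parsed mapping metadata
        metadata_cache_driver: apc
        # cache the DQL-to-SQL translation of queries
        query_cache_driver: apc
        # cache actual query results (use carefully, results can go stale)
        result_cache_driver:
            type: memcached
            host: localhost
            port: 11211

With something like this in place, Doctrine stops re-parsing mapping information and re-translating DQL on every request, and repeated identical queries can be answered straight from the result cache.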
Thus, if we set the expiration on the gateway cache to one week, we cannot reflect all of those rapid updates in our project and task controllers. To solve this problem, we can leverage Edge Side Includes (ESI) inside Symfony. Basically, any part of the page that has been defined inside an ESI tag can tell its own cache story to the gateway cache. Thus, we can have multiple cache strategies living side by side inside a single page. With this solution, our big picture will look as follows:

Thus, we are going to use the default Symfony and Doctrine caching features for the application and model layers, and you can also use some popular third-party bundles for more advanced settings. If you completely understand the caching principles, moving to other caching bundles would be like a breeze.

Key players in the HTTP cache header

Before diving into the Symfony application cache, let's familiarize ourselves with the elements that we need to handle in our cache strategies. To do so, open your project in your browser, inspect any resource with the 304 response code, and ponder the request/response headers inside the Network tab:

Among the response elements, there are four cache headers that we are interested in the most: expires and cache-control, which will be used for an expiration model, and etag and last-modified, which will be used for a validation model. Apart from these cache headers, we can have variations of the same cache (compressed/uncompressed) via the Vary header, and we can define a cache as private (accessible by a specific user) or public (accessible by everyone).

Using the Symfony reverse proxy cache

There is no complicated or lengthy procedure required to activate Symfony's gateway cache. Just open the front controller and uncomment the following lines:

// web/app.php
<?php
//...
require_once __DIR__.'/../app/AppKernel.php';
// uncomment this line
require_once __DIR__.'/../app/AppCache.php';

$kernel = new AppKernel('prod', false);
$kernel->loadClassCache();
// and this line
$kernel = new AppCache($kernel);
// ...
?>

Now, the kernel is wrapped around the Application Cache layer, which means that any request coming from the client will pass through this layer first.

Set the expiration for the dashboard page

Log in to your project and click on the Request/Response section in the debug toolbar. Then, scroll down to Response Headers and check the contents:

As you can see, only cache-control is sitting there with some default values among the cache headers that we are interested in. When you don't set any value for Cache-Control, Symfony considers the page contents as private to keep them safe. Now, let's go to the Dashboard controller and add some gateway cache settings to the indexAction() method:

// src/AppBundle/Controller/DashboardController.php
<?php
namespace AppBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Response;

class DashboardController extends Controller
{
    public function indexAction()
    {
        $uId = $this->getUser()->getId();
        $util = $this->get('mava_util');
        $userProjects = $util->getUserProjects($uId);
        $currentTasks = $util->getUserTasks($uId, 'in progress');

        $response = new Response();
        $date = new \DateTime('+2 days');
        $response->setExpires($date);

        return $this->render(
            'CoreBundle:Dashboard:index.html.twig',
            array(
                'currentTasks' => $currentTasks,
                'userProjects' => $userProjects
            ),
            $response
        );
    }
}

You might have noticed that we didn't change the render() method.
Instead, we added the response settings as the third parameter of this method. This is a good solution because now we can keep the current template structure, and adding new settings won't require any other changes in the code. However, you might wonder what other options we have. We can save the whole $this->render() method in a variable and assign a response setting to it as follows:

// src/AppBundle/Controller/DashboardController.php
<?php
// ...
$res = $this->render(
    'AppBundle:Dashboard:index.html.twig',
    array(
        'currentTasks' => $currentTasks,
        'userProjects' => $userProjects
    )
);
$res->setExpires($date);
return $res;
?>

Still looks like a lot of hard work for a simple response header setting. So let me introduce a better option. We can use the @Cache annotation as follows:

// src/AppBundle/Controller/DashboardController.php
<?php
namespace AppBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Cache;

class DashboardController extends Controller
{
    /**
     * @Cache(expires="next Friday")
     */
    public function indexAction()
    {
        $uId = $this->getUser()->getId();
        $util = $this->get('mava_util');
        $userProjects = $util->getUserProjects($uId);
        $currentTasks = $util->getUserTasks($uId, 'in progress');

        return $this->render(
            'AppBundle:Dashboard:index.html.twig',
            array(
                'currentTasks' => $currentTasks,
                'userProjects' => $userProjects
            ));
    }
}

Have you noticed that the response object is completely removed from the code? With an annotation, all response headers are sent internally, which helps keep the original code clean. Now that's what I call zero-fee maintenance. Let's check our response headers in Symfony's debug toolbar and see what it looks like:

The good thing about @Cache annotations is that they can be nested. Imagine you have a controller full of actions. You want all of them to have a shared maximum age of half an hour, except one that is supposed to be private and should be expired in five minutes. This sounds like a lot of code if you are going to use the response objects directly, but with an annotation, it will be as simple as this:

<?php
//...
/**
 * @Cache(smaxage="1800", public="true")
 */
class DashboardController extends Controller
{
    public function firstAction()
    {
        //...
    }

    public function secondAction()
    {
        //...
    }

    /**
     * @Cache(expires="300", public="false")
     */
    public function lastAction()
    {
        //...
    }
}

The annotation defined before the controller class will apply to every single action, unless we explicitly add a new annotation for an action.

Validation strategy

In the previous example, we set the expiry period very long. This means that if a new task is assigned to the user, it won't show up in his dashboard because of the wrong caching strategy. To fix this issue, we can validate the cache before using it. There are two ways to do validation:

- We can check the content's date via the Last-Modified header: In this technique, we certify the freshness of the content via the time it has been modified. In other words, if we keep track of the dates and times of each change on a resource, then we can simply compare that date with the cache's date and find out if it is still fresh.

- We can use the ETag header as a unique content signature: The other solution is to generate a unique string based on the contents and evaluate the cache's freshness based on its signature.

We are going to try both of them in the Dashboard controller and see them in action.
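As a quick illustration of the Last-Modified variant before we move on, here is a rough sketch of how it could be wired up with the Response object. This code is my own example, not from the article: the getLatestUserTask() helper and the getUpdatedAt() accessor are assumptions about the project's entities, while setLastModified(), isNotModified(), and setPublic() are standard Symfony HttpFoundation methods.

// src/AppBundle/Controller/DashboardController.php (hypothetical sketch)
// requires: use Symfony\Component\HttpFoundation\Request; at the top of the file
public function indexAction(Request $request)
{
    // Assumption: the utility service can return the most recently updated task for this user
    $latestTask = $this->get('mava_util')->getLatestUserTask($this->getUser()->getId());

    $response = new Response();
    $response->setLastModified($latestTask->getUpdatedAt());
    $response->setPublic();

    // If the client's If-Modified-Since date is still current, return 304 without re-rendering
    if ($response->isNotModified($request)) {
        return $response;
    }

    return $this->render(
        'AppBundle:Dashboard:index.html.twig',
        array(/* view variables as before */),
        $response
    );
}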
Using the right validation header is totally dependent on the current code. In some actions, calculating modified dates is way easier than creating a digital footprint, while in others, going through the date and time functions might look costly. Of course, there are situations where generating both headers is critical. So choosing one is totally dependent on the code base and what you are going to achieve.

As you can see, we have two entities in the indexAction() method and, considering the current code, generating the ETag header looks practical. So the validation header will look as follows:

// src/AppBundle/Controller/DashboardController.php
<?php
//...
class DashboardController extends Controller
{
    /**
     * @Cache(ETag="userProjects ~ finishedTasks")
     */
    public function indexAction()
    {
        //...
    }
}

The next time a request arrives, the cache layer looks into the ETag value in the controller, compares it with its own ETag, and calls the indexAction() method only if there is a difference between the two.

How to mix expiration and validation strategies

Imagine that we want to keep the cache fresh for 10 minutes and simultaneously keep an eye on any changes over user projects or finished tasks. It is obvious that tasks won't finish every 10 minutes and it is far beyond reality to expect changes on project status during this period. So what we can do to make our caching strategy efficient is combine Expiration and Validation together and apply them to the Dashboard controller as follows:

// src/CoreBundle/Controller/DashboardController.php
<?php
//...
/**
 * @Cache(expires="600")
 */
class DashboardController extends Controller
{
    /**
     * @Cache(ETag="userProjects ~ finishedTasks")
     */
    public function indexAction()
    {
        //...
    }
}

Keep in mind that Expiration has a higher priority than Validation. In other words, the cache is fresh for 10 minutes, regardless of the validation status. So when you visit your dashboard for the first time, a new cache entry is generated automatically and you will hit the cache for the next 10 minutes.

However, what happens after 10 minutes is a little different. Now, the expiration status is not satisfied; thus, the HTTP flow falls into the validation phase, and in case nothing happened to the finished tasks status or your project status, then a new expiration period is generated and you hit the cache again. However, if there is any change in your tasks or project status, then you will hit the server to get the real response, and a new cache built from the response's contents, a new expiration period, and a new ETag are generated and stored in the cache layer for future references.

Summary

In this article, you learned about the basics of gateway and Doctrine caching. We saw how to set expiration and validation strategies using HTTP headers such as Cache-Control, Expires, Last-Modified, and ETag. You learned how to set public and private access levels for a cache and use an annotation to define cache rules in the controller.
https://www.packtpub.com/books/content/caching-symfony
Blogspace vs. NPR 521 jonkl writes "National Public Radio's linking policy at npr.org has caused a fuss within the blog community that's hot and getting hotter. The policy's simply stated in two sentences: 'Linking to or framing of any material on this site without the prior written consent of NPR is prohibited. If you would like to link to NPR from your Web site, please fill out the link permission request form.' This is buried, of course, in a page linked to the site's footer, but somebody noticed and mentioned it to Howard Rheingold, who passed it on to Cory Doctorow of boingboing.net. Cory wrote scathing commentary, calling the policy 'brutally stupid,' even 'fatally stupid.' The outrage is spreading; this has to be a rough day for the NPR ombudsman who's deluged with email by now... ~24 hours after Cory's report." Reminds of the KPMG policy. Web Indexing (Score:4, Insightful) Damn Pirates! Re:Web Indexing (Score:2) So, when does NPR start suing Google, Alltheweb, and others for indexing, and even worse, CACHE-ING their site. As soon as some idiot repeals the DMCA, which grants these sites permission to do these things. Re:Web Indexing (Score:2) Please show me which part of the DMCA talks about this subject. You should do your own research, but here [gpo.gov] Re:Web Indexing (Score:2) Yah, I wonder if they even know that everyone who's bookmarked NPR is in violation of their linking policy. Browsers like Netscape, Mozilla, and even IE save bookmarks as a local HTML file containing links to sites. (Well, in IE's case it's not really a web page but, rather, a specially-interpretted set of directories and files but it's effectively the same as a file.) So eveyone out there on the Web: FREEZE! NPR! Re:Web Indexing (Score:2) I wonder if it's illegal to visit their website without my browser's history disabled. S linking? (Score:5, Funny) Why oh why? (Score:5, Insightful) Tough to think there is something you could refer to as "old fashioned" in regards to the web, but I can't find another way to describe it... Jason Re:Why oh why? (Score:3, Insightful) Maybe we should lobby the search engines (Score:4, Interesting) I think it'd put a stop to things like this rather quickly. Re:Why oh why? (Score:2) Anyway, they should realize that if they don't want people to access their content, they shouldn't be putting it on the fscking World Wide Web. What a shame... (Score:2, Interesting):3, Insightful) Re:Well, part of the reason... (Score:3, Insightful) Re:Well, part of the reason... (Score:4, Informative) Of course, I worked at the central office in DC...I don't know what the funding situation was like for individual stations. Diverse? REALLY??? (Score:3, Interesting) Re:That is sad (Score:3, Insightful) Crappy books can be just as much of a mind numbing time killer as crappy TV can. There is a lot of junk on TV, but there are a number of quality shows as well. Judge the shows by quality, don't merely dismiss them because you're elitist and it's just TV. Re:Govt. should NOT be paying for this (Score:3) You haven't listened to it much, then. During the debate about campaign finance reform, I heard two Republican senators do opinion pieces where they gave their reasons for opposing the legislation. (I was [innapproriately, yes] screaming "Godwin's Law!" at the radio, because one of them equated CFR with Nazism.) I have never heard a Democratic senator give an opinion piece on NPR. In your opinion, is unbiased approximately equal to liberal? 
I keep seeing this term ("liberal") being used, and it seems to be applied to organizations that I consider relatively unbiased. If they are not unbiased, can you list a media organization that deals with current events who you think is? Re:Why oh why? (Score:2) With all due respect to NPR [sneakyleaker.com], I think their policy is shortsighted and arrogant. However, I will not link to NPR [sassites.com], but to their competition [nbc.com] instead. Re:Why oh why? (Score:3, Funny) Re:Why oh why? (Score:3, Funny) Context... (Score:2) possible reason for the policy. Linking directly to information on their site (or any site for that matter) can put that information in a position to be quoted out of context. Linking can often be used to present evidence for one side of a debate or discussion. If used badly, this habit can misrepresent facts as given, where an overall story might bring a reader to a different conclusion. Re:Context... (Score:2, Insightful) Wrong (Score:2, Informative) There is no law mandating that viewers pay attention to certain content. There is no implicit agreement that viewing certain content also requires watching a commercial message. Fact is, people can ignore advertising. The problem and misunderstanding exists because of the power of the advertising industry. Advertisers have taken for granted they can influence the pysche of the public by advertising, never realizing that, given a choice, people may not watch what they have to offer. I just dare the government to mandate me to watch advertising.... Re:Wrong (Score:2) You mean I don't have to watch the ads? I've been afraid that the Madison Avenue Police were going to kick down the door if I even thought about using the >> button on my VCR! Re:Why oh why? (Score:3, Funny) This could otherwise be summed up as a "failure to understand the environment you operate in" and thus a "flawed business model". Re:Why oh why? (Score:2, Interesting) You do of course realize that these two things are not mutually exclusive. Not-for-profit does no mean no advertising. Not-for-profit only means that the organization is not in the business of making money. Any excess money that a regular company may consider profit is considered surplus by a not-for-profit and must be put back into the business. Take for example PBS (you know - it's where you watch Sesame Street when you aren't watching Jerry Springer). They have several sponsors which is a fancy way of saying advertisers. I have even seen the occasional commercial between shows. Framing vs. deep linking vs. linking (Score:3, Interesting). Have Your Cake and Eat It Too (Score:3, Informative) There is no such thing as 'Deep Linking' (Score:2, Insightful) The legal concept of 'Deep Linking' is flawed, since it assumes you are using some kind of 'special URL'. URL's are pointers. Either you point to the front door or you point to another area, they're still all pointers. For example, You can get to the Starbucks thru the Parking Lot, the Mall or the service entrance. If the service door is open and there's a sign saying Starbucks, people will walk in it. If the door is locked, then people will use the Mall or Lot. If there is a sign saying, 'use the door in the Mall', people will be REDIRECTED to where Starbucks wants them to go.. Re:Links on NPR (Score:2, Interesting) They linked to my site, and it resulted in 16 gigs of overage for the month at $12/gig. I didn't have that, and so my site got shut down for two and a half months. 
By then, I lost most of my regular visitors and it took a year to get about as many back. Had they asked before linking, I would have said no. It was supposed to be a small, intelligent discussion forum for those of us who choose not to work high-wage jobs. Deep Linking law? (Score:2, Insightful) Google on linking: [google.com] Searched the web for linking suit settle. Results 1 - 10 of about 12,500. Search took 0.15 seconds It seems to me companies keep settling just to prevent the law from ever being decided on by a judge. Deep linking should not be a website's ATM. Stupid (Score:4, Insightful) Why would NPR rather sue people than just prevent it at the source? Re:Stupid (Score:2) It's trivial to block linking by looking at the referrer field and only allowing access if it's empty or from npr.org. But npr.org doesn't want to block linking. They just want to be able to opt-in first. Re:Stupid (Score:2) Hmm... Couldn't you just glean these from the web server logs? Just a thought. Slightly off topic... (Score:2, Insightful) I guess the web pages I put up when my wife was pregnant with our first child was a sort of blog - I should get around to re-posting that somwehere, actually... but as a geek with a wife, two kids, and a mortgage, I don't seem to have the lifestyle that would make good blog material anymore. ----- Let "them" know you're not a terrorist [cafepress.com] Re:Slightly off topic... (Score:2) Hey, you violated their policy! (Score:4, Funny) Wait... I just deep linked to a link prohibiting deep links! Ack! My brain! Kinda Odd (Score:5, Insightful) Re:Kinda Odd (Score:2, Informative) Re:Kinda Odd (Score:2) Just sit back and watch the collection become like so much swiss cheese. License (Score:2) Freedom of Speech (Score:5, Funny) Just ask 2600. whoops Re: That's not entirely true (Score:3, Informative) "It's a basic right for someone to be able to publish publically available information, such as a universal resource locator." That's not entirely true. There have actually been court cases where they have ruled that linking to a URL can be infringing. Some of these include Starbucks, Religious Technology Center v. Netcom On-Line Communication Services, and US Intellectual Reserve Inc vs. Utah Lighthouse Ministry Inc. Here's a good article about the topic [domainnotes.com]. Re:Freedom of Speech (Score:2) This is clearly a case of freedom of speech. Yep, NPR can put anything they want in their policies. Enforcing it, on the other hand.... taxpayer-funded information (Score:2) Well, for one thing, they're not taxpayer-funded, aside from a couple of percent from competitive grants. For another thing, even if they were taxpayer-funded, this would hardly a unique example of access limitations to taxpayer-funded information. (I also think it's a really dumb thing for them to do, but your objection is a bit simplistic.) Re:taxpayer-funded information (Score:2) I'll concede that point, especially as I'm the only person I know who does not declare charitable gifts on my taxes, for precisely that reason. This begs the question, though, of whether that indirect funding necessarily entitles any taxpayer unlimited access to any information held by any organization that gets a tax break. If so, wouldn't that reasoning also apply to any corporations or other businesses that get any tax breaks? 
Your Taxes Pay Squat (Score:3, Insightful) Assuming you are a tax paying citizen, you should be informed that even if you pay $1000 (including withheld on the W2), less than half of a penny goes into supporting both public radio and television, and even including state taxes, you still haven't paid a full cent. The funneling of tax goes to stations in need of self-support on a case by case basis, everything else, from your favourite programmes to your favourite hosts are funded by people that pledge a donation during drives. You're probably not even paying enough for the cost of electricity to parse through the database and send a copy of the article to you. Additionally, there is a permit you may request for mirroring under most circumstance if you ever actually intend to go through with it (more so for those that actually would like to mirror, as I doubt you could). screw NPR (Score:2) With the power of anything [npr.org] I want [npr.org] from NPR's [npr.org] website. </sarcasm> :P Sounds like typical NPR retoric (Score:2, Interesting) NPR didn't speak up when the FCC was holding hearings asking for comments and conducting studies, they waited until after the FCC had made up it's mind to grant the frequencies, and then cried wolf, saying that they'd interfer with NPR's. The FCC said too little too late, and pointed to studies that were conducted showing contrary to NPR's unbased claim. So NPR lobbied congress and got them to stop the FCC. NPR has always been a control freak. There's nothing new about that. NOBODY LINK TO MY SITES (Score:2) [ctipowersolutions.com] [latechcenter.com] [ahmansonpet.net] [petscanarizona.net] i don't think they get it (Score:2) Legal policies and lawsuits are exactly the wrong approach to take. The whole point of web advertising is that you want as many people as possible to see the ads. If you forbid people to link to your site, even the front page (as NPR's policy seems to do), then you lose traffic and revenue. Not to mention the negative publicity that you'll get from web community sites (like No linking? Try and stop it (Score:2, Informative) Their "linking policy" will have absolutely no affect. Re:No linking? Try and stop it (Score:2) It's a matter of free speech. Is copyright law "categorically immune from challenge under the First Amendment?" That's yet to be decided [harvard.edu].. and the problem with that is what? (Score:2) I also don't see the problem. NPR is a public radio station. They aren't supported by advertising but by member contributions. If your bestofnpr.com has a nicer layout and causes more people to listen to their audio, all the better. If you make a dollar in the process (I doubt it), you will hopefully have the good sense of donating some money to them. Also, you should have the good sense of not using their trademark ("NPR") in your web address because that they can legally control. Re:Wondering why NPR might do this? (Score:2). Re:Wondering why NPR might do this? (Score:2) If they don't want you linking directly to their audio files, why not just *say* that? "Please don't link directly to the media files on this site; instead, link to the parent web pages which contain them. Thank you." Next time there is a pledge drive (Score:2) While we're on the subject, ever notice how many "commercials" there are on "commercial-free" NPR? I hope that the executive recruiters from the Corn Fairy (is that like the Tooth Fairy?) or whoever they are die long slow deaths. 
OK for linking, but framing I hate, too (Score:2) Re:OK for linking, but framing I hate, too (Score:2) Proof of an objectivist idea (Score:2) Google link:npr.org (Score:2, Interesting) N Public R (Score:2, Insightful) Why is censorship becoming the answer more and more rather than creativity? If they're worried about people bypassing adds and the like by direct linking to their media files, why not build ads into those files or just mention in those files that the content you are receiving is from a listner supported organization that needs your help if (and only if) you Spitefull fooey [npr.org] Re:N Public R (Score:2) #include <MHO.h> I think the 'P' in NPR might now mean "public" as in "public toilet" and "public housing". For something that's "public", it sure has it's own agenda... Re:Link me, but don't frame me. (Score:2) Side effect: anti-framing scripts will sometimes crash browsers (even with javascript disabled!) on YOUR site, preventing them from reading YOUR content entirely. Proof of this claim, please? Maybe I haven't tested enough user agents, but simple, direct "frame-breaker" scripts have never crashed anything on my tests. I'm mildly calling bullsh!t until I see a little evidence forthcoming on this matter. Your claim may be true (please prove me wrong), but it's very, very fuzzy. - skeptical Re:Link me, but don't frame me. (Score:2) Presumably one could sue or prosecute under existing copyright/plagiarism laws, if necessary. More questions: at what point does framing stop being "fair use" and start being plagiarism? Is framing one page from a site "fair use", or does that constitute stealing an entire document (because it perforce takes the whole page, barring some clever Perl script of course)?? Or would it be more like quoting one entire page out of a book (considering a book and a website as equivalent, publication-wise), thus possibly legally "fair use"?? Quite a can of worms, for sure. Still, suing over *deep-linking* makes as much sense as suing over a footnote that refers to a specific page in a book. What, should footnotes only give the book's title or table of contents (equivalent to a website's root page), and make the poor user root up the relevant page themselves?? Kinda defeats the purpose, eh? The Linking Form has a comments field... (Score:2) Instead of flooding the ombudsman's mailbox with outraged email. Why doesn't the word get spread to simply fill out the form, and leave your negative comments in there? Dear NPR, KPMG, and others... (Score:2) If you don't want just anyone linking to your web site, just make the initial page a dead end that requires a password protected account to gain access to the deeper pages. And make those all pages dynamic to that deep linking would be a waste of time. Either that or get your heads screwed straight and learn how the Web is supposed to work. And finally, for NPR: IANAL but I suspect that you'd lose if you wanted to pursue enforcing your linking policy via the courts. At best you could just jeopardize your public funding. If I'm not mistaken, the ``P'' stands for Public, right? Not Private (as in club). These organizations crack me up.. Too much irony to bear! (Score:2) The other irony is, if everyone filled out those damn requests to link to NPR's site, NPR would be so deluged with such requests that they would quickly abandon the policy. No linking or framing? (Score:2) In NPR's defence (Score:3, Insightful) What about The New York Times site? 
(free reg req'd, blah, blah) Their site is often linked to from Ever listen to NPR? Hear any ads? See any on their website? Even our precious so follow their rules (Score:2)? Re:bad news for the Internet? (Score:2) Plenty of suits have been settled, but I can't recall ever hearing a court actually rule on this. Re:bad news for the Internet? (Score:2) How about Ticketmaster vs. Tickets.com [wired.com]. The judge in this case ruled "Hyperlinking does not itself involve a violation of the Copyright Act. There is no deception in what is happening. This is analogous to using a library's card index to get reference to particular items, albeit faster and more efficiently. Re:bad news for the Internet? (Score:2, Funny) But hyperlinks are one-directional pointers from other sites. Why do they get to dictate which pointers other people choose to put in their sites? If they want control over incoming links, they should create their own text markup language, network protocol and browsers that only support bidirectional linking. They can publish their site on their new network and link up with like-minded content providers. Who knows, it could be the killer app of the new millenium. (But I doubt it.) Re:Isn't NPR Taxpayer Funded (Score:2) Re:and..... (Score:2) If you dont want people linking to you, use a secure system to prevent it. Otherwise, shut up. Re:and..... (Score:2) Its called "FAIR USE". One of things it allows is for people to INDEX AND CATALOG other peoples copyrighted information. A link is the same thing as an item in an Index or a catalog of articles. It is a protected use of copyrighted material! Second, preventing "framing" is fine and dandy. Courts have upheld that. Its trivial to implement this via source code. Third, submitting to filling out a non-automatic form is goddamn silly *and* chilling. I write a paper for a class and reference NPR with a footnote, do I need to get permission to do that? NO WAY. The web is the same exact thing. Re:Anyone Complaining are the Unfair (Score:3, Insightful) Regardless of who owns the content, regardless of who paid for it, manages it, or how it is published, all copyrighted works are subject to a legal premise called "Fair Use". "Fair Use" has been tested time and time again in all levels of federal, state, and local courts. It is a rock upon which copyright is founded. Regardless of what license or prohibitions are put upon a copyrighted work, they do not ever void the precept known as "Fair Use". Fair User specifically allows - allows! - the use of bits and pieces of copyrighted information. One of the explicit allowances is for the purpose of "indexing". Creating an index or catalog is critical to all management of information, whether digital or otherwise. Without this exception, book authors could prevent libraries from listing thier works in card catalogs, because the book title and 10-20 word description would be "copyrighted" and reproducing it would be a violation of copyrights of the author. Luckily, the Founding Dads and the Courts have realized that this is absurd - that it harms no one to catalog and index information - and if a small bit of "right to copy" is granted, well, so be it. Additionally, fair use applies to "footnotes" and "endnotes". If I write a research paper, or hell, even a fun little magazine article, I can reference other works, by name, page, by sentence if needed. That is also protected. And this is what NPR wants to take down. Slashdot is a publication. 
When Cringley writes a new article, we might like to know about it. Someone writes an abstract and then references it with a link. See how that works? NPR still owns the content (or Cringely, or whoever). And Slashdot is permitted by law to "use it fairly", by providing readers a reference to it. Now, lets say, for example, that NPR wants to limit resources and whatnot. Okay, fine. There are technological solutions to the problem - as well as appeals to peoples sensibilities - that can help. But this argument is moot anyways, because the point is as strawman. Prevent deep linking *only increases* the needed resources. Making me click the homepage, then a second, then a third, then a fourth page requires multiple times more resources than just sending me straight to where I want to go. All in all, deep linking, if such a thing can be claimed to exisit, is a protected form of speech and should not be limited. And, if some place chooses to limit it, someone will work around that limit (and rightfully so). What Nonsense! (Score:2) This is, of course, nonsense! How do you explain Mozart, Beethoven, Sir Isaac Newton, Galileo, Descartes, etc... What intellectual property rights did they have beside general societal rules against plagiarism? You, sir, are a fool in a foolish world. You are forgiven though, because we are all fools to one degree or another. Re:What I did (Score:2) Prolly just some kiddie. Sounds to me like he's making threats against you, or at least your site's connectivity. I think you have more legal grounds for a "suite" against him than he has against you. Again, with all the spelling errors and immature language, it's probably Chris' little cousin or something. Even better: ASK them for permission. (Score:3, Interesting) -russ Re:Worse than 'brutally stupid' (Score:3, Funny) This part is interesting: "Fowl"? What does Calumet City have against content about birds?
http://slashdot.org/story/02/06/19/1438200/blogspace-vs-npr
Re: Create a WebControl in the CodeBehind - From: Peter Bromberg [C# MVP] <pbromberg@xxxxxxxxxxxxxxxxxxx> - Date: Fri, 10 Nov 2006 13:46:02 -0800 Yeah, no way to avoid that because of the way ASP.NET monitors files and the bin folder. However, as a single assembly WAP project I bet it would restart faster. Peter -- Co-founder, Eggheadcafe.com developer portal: UnBlog: "jay@xxxxxxxxxxxxxxx" wrote: My first response didn't make it through.. Anyway... Thanks for the information, but that's not quite what I'm looking for. Currently the class is in a separate project. Each time I recompile the project, it is copied to the bin folder. Each time it is copied to the bin folder, the web app restarts. There's then a huge delay as it starts up. Once the delay is over, I log back in and go to my page. If we can get this to work in the code behind, simply refreshing the page works. It only recompiles the one page. This worked great when i manually added the control and got me most of the way through. Perhaps I should have said I want to avoid recompiles and restarts. Thanks for your help. Jay Peter wrote: Aye, that's the Catch-22 isn't it? What I would consider doing is to switch over to the new Web Application project model, and have your control as a project within the solution. Then, after it's deployed, all you need to do is build the newest version of your control and copy over it's assembly to the bin folder of your site. Presto! No Recompilations. Peter -- Co-founder, Eggheadcafe.com developer portal: UnBlog: "jay@xxxxxxxxxxxxxxx" wrote: Greetings ASP.NET 2.0 I'm developing a webcontrol. The class will ultimately end up in a webcontrols DLL, but for development purposes, I want to put the class in a code-behind. This is so that I can work on the class without recompiling and deploying a DLL, and waiting for the website to recompile. The website takes a long time to load after a recompile. I'm trying to avoid that delay. Making my changes in the code behind accomplishes that. public partial class TestPage_aspx : Page { .... } namespace howdy { public class TestControl : WebControl { .... } } Initially, I put a place holder on the page, then created an instance of TestControl and added it to the place holder. Now that I'm dealing with the postbacks, etc, I'd rather do it declaratively. <whatever:TestControl I'm having trouble registering the control on the page, though. <%@ Register Namespace="howdy" Assembly="???" TagPrefix="whatever" %> If that's the proper way to do it, what should I enter for Assembly? (I think it would be APP_CODE if I put it in the app_code folder, but that too causes a recompile, so I'm avoiding it). - References: - Create a WebControl in the CodeBehind - From: jay - Re: Create a WebControl in the CodeBehind - From: jay - Prev by Date: Re: Problems compiling/running ASP.NET AJAX sample pages... - Next by Date: Re: Urgent please - Previous by thread: Re: Create a WebControl in the CodeBehind - Next by thread: Print HTML Report Page or Print multiple application pages - Index(es):
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2006-11/msg01555.html
yes, here is a completed program, that I'm also working on. import javax.swing.*;public class CountEven {/* This program reads a positive integer from the user.It checks if the number is XXXXX or true also checks the number the user enteredis between 1 and 1000*/public static void main(String[] args) {String ans;//user enters numberans = JOptionPane.showInputDialog(null, "Enter a number");int input= Integer.parseInt(ans);input = input % 2;String answer = "False";if(input == 0){answer = "True";}//end ifJOptionPane.showMessageDialog(null, answer);String ans2;ans2 = JOptionPane.showInputDialog(null, "Is This Number Between 1 and 1000?");int num= Integer.parseInt(ans);num = num % 2;if(num == 0);if(num <1000);String Ans2 = "true";if(num >1001);ans2 = "False";{}//end ifJOptionPane.showMessageDialog(null, answer);} // end main()} // end class CountDivisors recompiled recieved correct response, thanks, XXXXX XXXXX included $20 bonus, it should have posted $40 ,but I added bonus anyway My account shows 2 accepts for the one question plus the $20 bonus I have one problem I can used help on Write a Java program that meets the following requirements: Declare a method to determine whether an integer is a prime number Use the following method declarations:public static Boolean isPrime (int num) An integer greater than 1 is a prime number if its only divisor is 1 or itself. For example, isPrime (11) returns true, and isPrime (9) returns false. Us the isPrime method to find the first thousand prime numbers and display every ten prime numbers in a row, as follows: 2 3 5 7 11 13 17 19 23 2931 37 41 43 47 53 59 61 67 6173 79 83 89 97 ...... Important Notes:The input and output must use JOptionPane dialog and display boxes. the bill shows two accepts for $40 and the $20 bonus, thanks for help Partha ,My brother all is well Thanks Reyes
http://www.justanswer.com/computer-programming/1r50q-write-java-program-will-read-integers-find-total.html
Raspberry Pi port, piCore-8.0 is available! #include <vga.h> Your best bet is probably building the version from the link in reply#9. Hi eltoneIt's not in the repository. Go to: click on the ZIP button to download the zipped package. libvga.so.1.* and libvgagl.so.1.* will be created when you compile svgalib but w_n_o_$ is my dialect. Hi eltoneQuotebut w_n_o_$ is my dialect.I don't know what that means.I don't think it really matters where you unzip it. Create a subdirectory to work in, copy the zip file there, andunzip it.I would just try make install, if it does something you don't like, you can reboot and the system will be as it was before. Create a subdirectory to work in, copy the zip file there, andunzip it. tc@box:~/svgalib$ unzip -d svgalib-1-master.zipBusyBox v1.20.2 (2012-08-07 01:31:01 UTC) multi-call binary.Usage: unzip [-opts[modifiers]] FILE[.zip] [LIST] [-x XLIST] [-d DIR]Extract files from ZIP archives -l List archive contents (with -q for short form) -n Never overwrite files (default) -o Overwrite -p Send output to stdout -q Quiet -x XLST Exclude these files -d DIR Extract files into DIRtc@box:~/svgalib$ Hi Rich,2b) now to unzip, should I use CLI or terminal? What is the preferred way to select CLI and what syntax returns to Desktop from CLI? I would just try make install, if it does something you don't like, you can reboot and the system will be as it was before. tc@box:~/svgalib$ make installmake: *** No rule to make target `install'. Stop.tc@box:~/svgalib$ I understand AraLinux SRC will compile on the latest Slackware build: I've been told CLI is much better than terminal! I use terminal 100% w/TCP, since it's a simple click on the terminal icon. Obviously, 'make install' does not compile svgalib. What can be missing?
http://forum.tinycorelinux.net/index.php?topic=15025.15
. , MSSQL, PostgreSQL the drivers web2py can use: sqlite3, pymysql,.sqlite')) Using the DAL "stand-alone" The DAL can be used in a non-web2py environment via from pydal import DAL, Field DAL constructor Basic use: >>> db = DAL('sqlite://storage.sqlite') The database is now connected and the connection is stored in the global variable db. At any time you can retrieve the connection string. >>> db._uri sqlite://storage.sqlite and the database name >>>..sqlite',"): ndb. In the MySQL connection string, the ?set_encoding=utf8mb4 at the end sets the encoding to UTF-8 and avoids an Invalid utf8 character string: error on Unicode characters that consist of four bytes, as by default, MySQL can only handle Unicode characters that consist of one to three bytes. [mathiasbyensbe]. Some times you may need to generate SQL as if you had a connection but without actually connecting to the database. This can be done with db = DAL('...', do_connect=False) db = DAL('...', db_codec='latin1') Otherwise you'll get UnicodeDecodeError tickets. Connection pooling A common second and by default. The number of attempts is set via the attempts parameter. Lazy Tables setting lazy_tables = True provides a major performance boost. See below: lazy tables Model-less applications Using web2py's model directory for your application models is very convenient and productive. With lazy tables and conditional models, performance is usually acceptable even for large applications. Many experienced developers use this is production environments. However, it is possible to define DAL tables on demand inside controller functions or modules. This may make sense when the number or complexity of table definitions overloads the use of lazy tables and conditional models. This is referred to as "model-less" development by the web2py community. It means less use of the automatic execution of Python files in the model directory. It does not imply abandoning the concept of models, views and controllers. Web2py's auto-execution of Python code inside the model directory does this for you: - models are run automatically every time a request is processed - models access web2py's global scope. Models also make for useful interactive shell sessions when web2py is started with the -M commandline option. Also, remember maintainability: other web2py developers expect to find model definitions in the model directory. To use the "model-less" approach, you take responsibility for doing these two housekeeping tasks. You call the table definitions when you need them, and provide necessary access to global scope via the current object (as described in Chapter 4). For example, a typical model-less application may leave the definitions of the database connection objects in the model file, but define the tables on demand per controller function. The typical case is to move the table definitions to a module file (a Python file saved in the modules directory). If the function to define a set of tables is called define_employee_tables() in a module called "table_setup.py", your controller that wants to refer to the tables related to employee records in order to make an SQLFORM needs to call the define_employee_tables() function before accessing any tables. The define_employee_tables() function needs to access the database connection object in order to define tables. This is why you need to correctly use the current object in the module file containing define_employee_tables() (as mentioned above). 
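To make the model-less pattern concrete, here is a minimal sketch of what such a module might contain. The module name, table name and fields are invented for the example; it assumes the model file has published the connection object as current.db, as described in Chapter 4.

# modules/table_setup.py -- illustrative sketch only
from gluon import current
from pydal import Field

def define_employee_tables():
    db = current.db                      # connection published by a model file
    if 'employee' not in db.tables:      # define on demand, at most once per request
        db.define_table('employee',
                        Field('name'),
                        Field('hired_on', 'date'))
    return db

A controller function that needs these tables simply calls define_employee_tables() before touching db.employee (for example before building a SQLFORM), typically after importing it with from table_setup import define_employee_tables.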
check_reserved tells the constructor to check table names and column names against reserved SQL keywords in target back-end databases. check_reserved.sqlite',. Database quoting and case settings Quoting of SQL entities are enabled by default in DAL, that is: entity_quoting = True This way identifiers are automatically quoted in SQL generated by DAL. At SQL level keywords and unquoted identifiers are case insensitive, thus quoting an SQL identifier makes it case sensitive. Notice that unquoted identifiers should always be folded to lower case by the back-end engine according to SQL standard but not all engines are compliant with this (for example PostgreSQL default folding is upper case). By default DAL ignores field case too, to change this use: ignore_field_case = False To be sure of using the same names in python and in the DB schema, you must arrange for both settings above. Here is an example: db = DAL(ignore_field_case=False) db.define_table('table1', Field('column'), Field('COLUMN')) query = db.table1.COLUMN != db.table1.column Making a secure connection Sometimes it is necessary (and advised) to connect to your database using secure connection, especially if your database is not on the same server as your application. In this case you need to pass additional parameters to the database driver. You should refer to database driver documentation for details. For PostgreSQL with psycopg2 it should look like this: DAL('postgres://user_name:user_password@server_addr/db_name', driver_args={'sslmode': 'require', 'sslrootcert': 'root.crt', 'sslcert': 'postgresql.crt', 'sslkey': 'postgresql.key'}) where parameters sslrootcert, sslcert and sslkey should contain the full path to the files. You should refer to PostgreSQL documentation on how to configure PostgreSQL server to accept secure connections. Other DAL constructor parameters Database folder location folder sets the place where migration files will be created (see Migrations section in this chapter for details). It is also used for SQLite databases. Automatically set within web2py. Set a path when using DAL outside web2py. Default migration settings, that is available using the -S command line option (read more in Chapter 4). You need to choose an application to run the shell on, mind that database changes may be persistent. So be carefull and do NOT exitate to create a new application for doing testing instead of tampering with an existing one. Start by creating a connection. For the sake of example, you can use SQLite. Nothing in this discussion changes when you change the back-end engine. Note that most of the code snippets that contain the python prompt >>> are directly executable via a plain shell, which you can obtain using -PS command line options. Table constructor Tables are defined in the DAL via define_table. define_table signature The signature for define_table method is: define_table(tablename, *fields, **kwargs) It accepts a mandatory table name and an optional number of Field instances (even none). You can also pass a Table (or subclass) object instead of a Field one, this clones and adds all the fields (but the "id") to the defining table. Other optional keyword args are: rname, redefine, common_filter, fake_migrate, fields, format, migrate, on_define, plural, polymodel, primarykey, sequence_name, singular, table_class, and trigger_name, which are discussed below. 
For example: >>> db.define_table('person', Field('name')) <Table person (id, name)> It defines, stores and returns a Table object called "person" containing a field (column) "name". This object can also be accessed via db.person, so you do not need to catch the value returned by define_table. id: Notes about the primary key Do not declare a field called "id", because one is created by web2py anyway. Every table has a field called "id" by default. It is an auto-increment integer field (usually starting at 1) used for cross-reference and for making every record unique, so "id" is a primary key. (Note: the id counter which have a primary key under a different name. With some limitation, you can also use different primary keys using the primarykey parameter. plural and singular Smartgrid objects may need to know the singular and plural name of the table. The defaults are smart but these parameters allow you to be specific. Smartgrid is described in Chapter 7. redefine Tables can be defined only once but you can force web2py to redefine an existing table: db.define_table('person', Field('name')) db.define_table('person', Field('name'), redefine=True) The redefinition may trigger a migration if table definition changes. format: Record representation It is optional but recommended to specify a format representation for records with the format parameter..otherfield.representattribute for all fields referencing this table. This means that SQLTABLE will not show references by id but will use the format preferred representation instead. (Look at Serializing Rows in views section in this chapter to learn more about SQLTABLE.) rname: Real name and keyed tables section in this chapter. migrate, fake_migrate migrate sets migration options for the table. Refer to Migrations section in this chapter for details. table_class If you define your own Table class as a sub-class of pydal.objects.Table, you can provide it here; this allows you to extend and override methods. Example: from pydal.objects import Table class MyTable(Table): ... db.define_table(..., table_class=MyTable) sequence_name The name of a custom table sequence (if supported by the database). Can create a SEQUENCE (starting at 1 and incrementing by 1) or use this for legacy tables with custom sequences. Note that when necessary, web2py will create sequences automatically by default. trigger_name Relates to sequence_name. Relevant for some backends which do not support auto-increment numeric fields. sometableto (see Chapter 4) can help, but web2py offers a big performance boost via lazy_tables. This feature means that table creation is deferred until the table is actually referenced. Enabling lazy tables is made when initialising a database via the DAL constructor. It requires setting the lazy_tables parameter: DAL(..., lazy_tables=True)(fieldname, type='string', length=None, default=DEFAULT, required=False, requires=DEFAULT, ondelete='CASCADE', notnull=False, unique=False, uploadfield=True, widget=None, label=None, comment=None, writable=True, readable=True, searchable=True, listable=True, update=None, authorize=None, autodelete=False, represent=None, uploadfolder=None, uploadseparate=None, uploadfs=None, compute=None, filter_in=None, filter_out=None, custom_qualifier=None, map_none=None, rname=None) where DEFAULT is a special value used to allow the value None for a parameter. Not all of them are relevant for every field. length is relevant only for fields of type "string". uploadfield, authorize, and autodelete next. 
rnameprovides the field with a "real name", a name for the field known to the database adapter; when the field is used, it is the rname value which is sent to the database. The web2py name for the field is then effectively an alias. to True, then the file is stored in a blob field within the same table and the value of uploadfieldis the name of the blob field. This will be discussed in more detail later in the More on uploads section in this chapter. uploadfoldersets the folder for uploaded files. By default, an uploaded file goes into the application's "uploads/" folder, that is into os.path.join(request.folder, 'uploads')(this seems not the case for MongoAdapter at present). For example:will upload files to the "web2py/applications/myapp/static/temp" folder. Field(..., uploadfolder=os.path.join(request.folder, 'static/temp')) links to existing uploads. SFTP storage. You need to have PyFileSystem installed for this to work. uploadfsmust point to PyFileSystem. autodeletedetermines if the corresponding uploaded file should be deleted when the record referencing the file is deleted. For "upload" fields only. However, records deleted by the database itself due to a CASCADE operation will not trigger web2py's autodelete. The web2py Google group has workaround discussions.. searchabledeclares whether a field is searchable in grids ( SQLFORM.gridand SQLFORM.smartgridare described in Chapter 7). Notice that a field must also be readable to be searched. listabledeclares whether a field is visible in grids (when listing multiple records). representcan be None or can point to a function that takes a field value and returns an alternate representation for the field value. Examples: db.mytable.name.represent = lambda name, row: name.capitalize() db.mytable.other_id.represent = lambda oid, row: row.myfield db.mytable.some_uploadfield.represent = lambda val, row: A('get it', _href=URL('download', args=val)) filter_inand filter_outcan be set to callables for further processing of field's value. filter_inis passed the field's value to be written to the database before an insert or update while filter_outis passed the value retrieved from the database before field assignment. The value returned by the callable is then used. See filter_in and filter_out section in this chapter. custom_qualifieris a custom SQL qualifier for the field to be used at table creation time (cannot use for field of type "id", "reference", or "big-reference"). Field types:<type> list:<type> and contains section in this chapter. The json field type is pretty much explanatory. It can store any json serializable object. It is designed to work specifically for MongoDB and backported to the other database adapters for portability. blob fields are also special. By default, binary data is encoded in base64 before being stored into the actual database field, and it is decoded when extracted. This has the negative effect of using 33% more storage space than necessary in blob fields, but has the advantageof making the communication independent of back-end-specific escaping conventions. Run-time field and table modification Most attributes of fields and tables can be modified after they are defined: >>> db.define_table('person', Field('name',>> db.person.name.default = 'anonymous' notice that attributes of tables are usually prefixed by an underscore to avoid conflict with possible field names. 
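Spelled out a little more fully, such run-time changes look like this (reusing the person table from above; the new attribute values are arbitrary):

db.person.name.default = 'anonymous'   # field attributes can be changed after definition
db.person.name.writable = False
db.person.name.label = 'Full name'
db.person._format = '%(name)s'         # table attributes carry the leading underscore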
You can list the tables that have been defined for a given database connection: >>> db.tables ['person'] You can query for the type of a table: >>> type(db.person) <class 'pydal.objects.Table'> You can access a table using different syntaxes: >>> db.person is db['person'] True You can also list the fields that have been defined for a given table: >>> db.person.fields ['id', 'name'] Similarly you can access fields from their name in multiple equivalent ways: >>> type(db.person.name) <class 'pydal.objects.Field'> >>> db.person.name is db.person['name'] True Given a field, you can access the attributes set in its definition: >>> db.person.name.type string >>> db.person.name.unique False >>> db.person.name.notnull False >>> see them later. A special method of the field object is validate and it calls the validators for the field. >>> db.person.name.validate('John') ('John', None) "sql.log". Notice that by default web2py uses the "app/databases" folder for the log file and all other migration files it needs. You can change this setting the folderargument to DAL. To set a different log file name, for example "migrate.log" you can dodb = DAL(..., adapter_args=dict(logfile='migrate.log')) The first argument of define_table is always the table name. The other unnamed arguments are the fields (Field). The function also takes an optional keyword argument called "migrate": db.define_table('person', ..., migrate='person.table') The value of migrate is the filename. There may not be two tables in the same application with the same migrate filename. The DAL class also takes a "migrate" argument, which determines the default value of migrate for calls to define_table. For example, db = DAL('sqlite://storage.sqlite',: some parse function when selecting records, most likely this is due to Notice you can pass a parameter list of "id" values of the inserted records. On the supported relational databases there is no advantage in using this function as opposed to looping and performing individual inserts but on Google App Engine NoSQL, there is a major speed advantage. commit and rollback The insert, truncate, delete, and update operations aren't actually committed until web2py issues the commit command. The create and drop operations may be executed immediately, depending on the database engine. Calls to web2py actions are automatically wrapped in transactions. If you executed commands via the shell, you are required to manually commit: >>> (pseudo code) :(). five optional arguments: placeholders, as_dict, fields, colnames, and as_ordered_dict.': val1_row1, 'field2': val2_row1}, {'field1': val1_row2, 'field2': val2_row2}] as_ordered_dict is pretty much like as_dict but the former ensures that the order of resulting fields (OrderedDict keys) reflect the order on which they are returned from DB driver: [OrderedDict([('field1', val1_row1), ('field2', val2_row1)]), OrderedDict([('field1', val1_row2), ('field2', val2_row2)])].sqlite') declare the auto-increment field with 'id' type (that is using FIeld('...', 'id')).. Currently keyed tables are only supported for DB2, MSSQL, Ingres and Informix, but others engines will be/to/file')) In the case of an "upload" field, the default value can optionally be set to a path (an absolute path or a path relative to the current app folder), the default value is then assigned to each new record that does not specify an image. Notice that this way multiple records may end to reference the same default image file and this could be a problem on a Field having autodelete enabled. 
When you do not want to allow duplicates for the image field (i.e. multiple records referencing the same file) but still want to set a default value for the "upload" then you need a way to copy the default file for each new record that does not specify an image. This can be obtained using a file-like object referencing the default file as the default argument to Field, or even with: Field('image', 'upload', default=dict(data='<file_content>', filename='<file_name>')) Normally an insert is handled automatically via a SQLFORM or a crud form (which is a SQLFORM) but occasionally you already have the file on the filesystem and want to upload it programmatically. This can be done in this way: with open(filename, 'rb') as stream: db.myfile.insert(image=db.myfile.image.store(stream, filename)) It is also possible to insert a file in a simpler way and have the insert method call store automatically: with open(filename, 'rb') as stream:')) with open(filename, 'rb') as stream: db.myfile.insert(image=db.myfile.image.store(stream, filename), image_file=stream.read()) The retrieve method does the opposite of store. When uploaded files are stored on filesystem (as in the case of a plain Field('image', 'upload')) the code: row = db(db.myfile).select().first() (filename, fullname) = db.myfile.image.retrieve(row.image, nameonly=True) retrieves the original file name (filename) as seen by the user at upload time and the name of stored file (fullname, with path relative to application folder). While in general the call: (filename, stream) = db.myfile.image.retrieve(row.image) retrieves the original file name (filename) and a file-like object ready to access uploaded file data (stream). Notice that the stream returned by retrieveis a real file object in the case that uploaded files are stored on filesystem. In that case remember to close the file when you have done calling stream.close(). Here is an example of safe usage of retrieve: from contextlib import closing import shutil row = db(db.myfile).select().first() (filename, stream) = db.myfile.image.retrieve(row.image) with closing(stream) as src, closing(open(filename, 'wb')) as dest: shutil.copyfileobj(src, dest) Query, Set, Rows Let's consider again the table defined (and dropped) previously and insert three records: >>> db.define_table('person', Field('name')) <Table person (id,.id, row.name ... 1 Alex 2 Bob 3).select(): ... print row.id, row.name ... 1 Alex 2 Bob 3 Carl and web2py understands that if you ask for all records of the table person rows[i].name and enable, instead, the less compact notation: rows[i].person.name Yes this is unusual and rarely needed. Row objects also have two important methods: row.delete_record() and row.update_record(name="new value") Using an iterator-based select for lower memory use Python "iterators" are a type of "lazy-evaluation". They 'feed' data one step at time; traditional Python loops create the entire set of data in memory before looping. The traditional use of select is: for row in db(db.table).select(): ... but for large numbers of rows, using an iterator-based alternative has dramatically lower memory use: for row in db(db.table).iterselect(): ... Testing shows this is around 10% faster as well, even on machines with large RAM... 
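As a sketch of the difference in practice, a loop over the person table can be written with iterselect so that rows are fetched lazily, one at a time, instead of being materialised up front:

names = []
for row in db(db.person.id > 0).iterselect(db.person.name):
    names.append(row.name)   # each row is fetched as the loop advances

iterselect is called here just like select; only the way rows are delivered changes.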
Note: this delete shortcut syntax does not currently work if versioning is activated You can insert records: db.mytable[None] = dict(myfield='somevalue') It is equivalent to db.mytable.insert(myfield='somevalue') and it creates a new record with field values specified by the dictionary on the right hand side. Note: insert shortcut was previously db.table[0] = .... It has changed in PyDAL 19.02 to permit normal usage of id 0._id', 'reference person')) and a simple select from this table: things = db(db.thing).select() which is equivalent to things = db(db.thing._id != None)_id.name Here thing.owner_id expression person.thing is a shortcut for db(db.thing.owner_id ==olambda])}} For working with multiple rows, SQLFORM.grid and SQLFORM.smartgrid are preferred to SQLTABLE because they are more powerful. Please see Chapter 7. orderby, groupby, limitby, distinct, having, orderby_on_limitby, join, left, cache The select command takes a number of optional arguments. orderby, to overcome this limit, sorting can be accomplished on selected rows:import random rows = db(...).select().sort(lambda row: random.random()) You can sort the records according to multiple fields by concatenating them with a "|": >>> for row in db().select(db.person.name, orderby=db.person.name|db.person.id): ... print row.name ... Alex Bob Carl groupby, having. distinct limitby With limitby=(min, max), you can select a subset of the records from offset=min to but not including offset=max. In the next example we select the first two records). join, left These are involved in managing one to many relations. They are described in Inner join and Left outer join sections respectively. cache, cacheable An example use which gives much faster selects is: rows = db(query).select(cache=(cache.ram, 3600), cacheable=True) Look at Caching selects section in this chapter, to understand what the trade-offs are. Logical operators Queries can be combined using the binary AND operator " &": >>> rows = db((db.person.name=='Alex') & (db.person.id > 3)).select() >>> for row in rows: print row.id, row.name >>> len(rows) 0 and the binary OR operator " |": >>> rows = db((db.person.name == 'Alex') | (db.person.id > 3)).select() >>> for row in rows: print row.id, row.name 1 Alex You can negate a sub-query inverting its: >>> db(db.person.name != 'William').count() 3: >>> db(db.person).isempty() False You can delete records in a set: >>> db(db.person.id > 3).delete() 0 The delete method returns the number of records that were deleted. And you can update all records in a set by passing named arguments corresponding to the fields that need to be updated: >>> db(db.person.id > 2).update(name='Ken') 1 The update method returns the number of records that were updated.() case An expression can contain a case clause for example: >>> condition = db.person.name.startswith('B') >>> yes_or_no = condition.case('Yes', 'No') >>> for row in db().select(db.person.name, yes_or_no): ... print row.person.name, row[yes_or_no] # could be row(yes_or_no) too ... 
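A short sketch of these helpers in use, reusing the person table. The field values are invented, and the example assumes a validator such as IS_NOT_EMPTY() has been set on the name field so that there is something to fail:

# insert 'John' only when no record already matches the condition
db.person.update_or_insert(db.person.name == 'John', name='John')

# validate_and_insert runs the validators and reports failures instead of inserting
ret = db.person.validate_and_insert(name='')
if ret.errors:
    print(ret.errors)   # e.g. {'name': 'enter a value'}
else:
    print(ret.id)       # id of the newly inserted record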
Alex No Bob Yes Ken No update_record web2py also allows updating a single record that is already in memory using update_record >>> row = db(db.person.id == 2).select().first() >>> row.update_record(name='Curt') <Row {'id': 2L, : >>> row = db(db.person.id > 2).select().first() >>> row.>> row.update_record() # saves above change <Row {'id': 3L, 'name': 'Philip'}> Note, you should avoid using row.update_record()with no arguments when the rowobject contains fields that have an updateattribute (e.g., Field('modified_on', update=request.now)). Calling row.update_record()will retain all of the existing values in the rowobject, so any fields with updateattributes will have no effect in this case. Be particularly mindful of this with tables that include auth.signature.: db(db[tablename]._id == id).update(**{fieldname:value}) Notice we used table._id instead of table.id. In this way the query works even for tables with a primary key field with type other than "id". first and Given a Rows object containing records: rows = db(query).select() first_row = rows.first() last_row = rows.last() are equivalent to first_row = rows[0] if len(rows) else None last_row = rows[-1] if len(rows) else None Notice, first() and last() allow you to obtain obviously the first and last record present in your query, but this won't mean that these records are going to be the first or last inserted records. In case you want the first or last record inputted in a given table don't forget to use orderby=db.table_name.id. If you forget you will only get the first and last record returned by your query which are often in a random order determined by the backend query optimiser.! Combining rows Rows objects can be combined at the Python level. Here we assume: >>> print rows1 person.name Max Tim >>> print rows2 person.name John Tim You can do union of the records in two sets of rows: >>> rows3 = rows1 + rows2 >>> print rows3 person.name Max Tim John Tim You can do union of the records removing duplicates: >>> rows3 = rows1 | rows2 >>> print rows3 person.name Max Tim John You can do intersection of the records in two sets of rows: >>> rows3 = rows1 & rows2 >>> print rows3 person.name Tim find, exclude, sort Some times you need to perform two selects and one contains a subset of a previous select. In this case it is pointless to access the database again. The find, exclude and sort objects allow you to manipulate a Rows object')) <Table person (id, name)> >>> db.person.insert(name='John') 1 >>> db.person.insert(name='Max') 2 >>> db.person.insert(name='Alex') 3 >>> rows = db(db.person).select() >>> for row in rows.find(lambda row: row.name[0]=='M'): ... print row.name ... Max >>> len(rows) 3 >>> for row in rows.exclude(lambda row: row.name[0]=='M'): ... print row.name ... Max >>> Sort takes an optional argument reverse=True with the obvious meaning. The find method if. The selection criteria in the example above is a single field. It can also be a query, such as db.person.update_or_insert((db.person.name == 'John') & (db.person.birthplace == 'Chicago'), name='John', birthplace='Chicago', pet='Rover').errors. ret.errors holds a key-value mapping where each key is the field name whose validation failed, and the value of the key is the result from the validation error (much like form.errors). ret.updated and errors will be in ret.errors. 
smart_query (experimental) There are times when you need to parse a query using natural language such as name contains m and age greater than 18 The DAL provides a method to parse this type of queries: search = 'name contains_query'])) <Table item (id, unit_price, quantity, total_price)> >>> rid = db.item.insert(unit_price=1.99, quantity=5) >>> db.item[rid] <Row {'total_price': '9.95', 'unit_price': 1.99, 'id': 1L, 'quantity': 5L}> Notice that the computed value is stored in the db and it is not computed on retrieval, as in the case of virtual fields, described next.) (experimental) web2py provides a new and easier way to define virtual fields and lazy virtual fields. This section is marked experimental because.item.unit_price * row.item: db.item.discounted_total = \ Field.Method(lambda row, discount=0.0: row.item.unit_price * row.item.quantity * (100.0 - discount / 100)) In this case row.discounted_total is not a value but a function. The function takes the same arguments as the function passed to the Method constructor except for row which is implicit (think of it as self for objects). The lazy field in the example above allows one to compute the total price for each item: for row in db(db.item).select(): print row.discounted_total() And it also allows to pass an optional discount percentage (say regular fields (length, default, required, etc). They do not appear in the list of db.table.fieldsand in older versions of web2py they require a special approach to display in SQLFORM.grid and SQLFORM.smartgrid. See the discussion on grids and virtual fields in Chapter 7. Old style virtual fields In order to define one or more virtual fields, you can also One to many relation To illustrate how to implement one to many relations with the DAL, define another table "thing" that refers to the table "person" which we redefine here: >>> db.define_table('person', ... Field('name')) <Table person (id, name)> >>> db.person.insert(name='Alex') 1 >>> db.person.insert(name='Bob') 2 >>> db.person.insert(name='Carl') 3 >>> db.define_table('thing', ... Field('name'), ... Field('owner_id', 'reference person')) <Table thing (id, name, owner_id)> Table "thing" has two fields, the name of the thing and the owner of the thing. The "owner_id" field is a reference field, it is intended that the field reference the other table by its id. A reference type can be specified in two equivalent ways, either: Field('owner_id', 'reference person') or: Field('owner_id', db.person) The latter is always converted to the former. They are equivalent except in the case of lazy tables, self references or other types of cyclic references where the former notation is the only allowed notation. ==_id)_id)) >>>_id1', 'reference person'), Field('owner_id2', 'reference person')) rows = db(db.person).select( join=[db.person.with_alias('owner_id1').on(db.person.id == db.thing.owner_id1), db.person.with_alias('owner_id2').on(db.person.id == db.thing.owner_id. Here is an example: >>> rows = db().select(db.person.ALL, db.thing.ALL, ... left=db.thing.on(db.person.id == db.thing.owner_id)) >>> parameter._id ... )')) <Table person (id, name)> >>> db.person.bulk_insert([dict(name='Alex'), dict(name='Bob'), dict(name='Carl')]) [1, 2, 3] >>> db.define_table('thing', ... Field('name')) <Table thing (id, name)> >>> db.thing.bulk_insert([dict(name='Boat'), dict(name='Chair'), dict(name='Shoes')]) [1, 2, 3] >>> db.define_table('ownership', ... Field('person', 'reference person'), ... 
Field('thing', 'reference thing')) <Table ownership (id, person, thing)> the existing ownership relationship can now be rewritten as: >>> db.ownership.insert(person=1, thing=1) # Alex owns Boat 1 >>> db.ownership.insert(person=1, thing=2) # Alex owns Chair 2 >>> db.ownership.insert(person=2, thing=3) # Bob owns Shoes 3 Now you can add the new relation that Curt co-owns Boat: >>> db.ownership.insert(person=3, thing=1) # Curt owns Boat too 4, 'has', row.thing.name ... Alex has Boat Alex has Chair Bob has Shoes Curt has-to-many relations is tagging, you can found an example of this in the next section. Tagging is also discussed in the context of the IS_IN_DB and IS_IN_SET validators on chapter 7. Tagging works even on database backends that do not support JOINs like the Google App Engine NoSQL.')) <Table product (id, name, colors)> >>> db.product.colors.requires = IS_IN_SET(('red', 'blue', 'green')) >>> db.product.insert(name='Toy Car', colors=['red', 'green']) 1 >>>') <Table tag (id, name)> >>> db.define_table('product', ... Field('name'), ... Field('tags', 'list:reference tag')) <Table product (id, name, tags)> >>> a = db.tag.insert(name='red') >>> b = db.tag.insert(name='green') >>> c = db.tag.insert(name='blue') >>> db.product.insert(name='Toy Car', tags=[a, b, c])')) <Table log (id, event, event_time, severity)>() >>> db.log.insert(event='port scan', event_time=now, severity=1) 1 >>> db.log.insert(event='xss injection', event_time=now, severity=2) 2 >>> db.log.insert(event='unauthorized login', event_time=now, severity=3) 3 like, ilike, regexp, startswith, maps to the LIKE word in ANSI-SQL. LIKE is case-sensitive in most databases, and depends on the collation of the database itself. The like method is hence case-sensitive but it can be made case-insensitive with db.mytable.myfield.like('value', case_sensitive=False) which is the same as using ilike db.mytable.myfield.ilike('value') web2py also provides some shortcuts: db.mytable.myfield.startswith('value') db.mytable.myfield.endswith('value') db.mytable.myfield.contains('value') which are roughly equivalent respectively to db.mytable.myfield.like('value%') db.mytable.myfield.like('%value') db.mytable.myfield.like('%value%') Remember that contains has a special meaning for list:<type> fields, as discussed in previous list:<type> and contains MySQL, Oracle, PostgreSQL, SQLite, and MongoDB (with different degree of support).() > 2018).select(): ... print row.event ... port scan xss injection unauthorized login belongs The SQL IN operator is realized via the belongs method which returns true when the field value belongs to the specified set (list.severity, row.event ... 1 port scan 2 xss injection 3 unauthorized login In those cases where a nested select is required and the look-up field is a reference we can also use a query as argument. For example: db.define_table('person', Field('name')) db.define_table('thing', Field('name'), Field('owner_id', 'reference person')) db(db.thing.owner_id.belongs(db.person.name == 'Jonathan')).select() In this case it is obvious that the nested select only needs the field referenced by the db.thing.owner_id_id = lazy) In this case lazy is a nested expression that computes the id of person "Jonathan". The two lines result in one single SQL query. sum, avg, min, max and len Previously, you have used the count operator to count records. Similarly, you can use the sum operator to add (sum) the values of a specific field from a group of records. 
As in the case of count, the result of a sum is retrieved via the storage object: >>> field's value. It is generally used on string or text fields but depending on the back-end it may still work for other types too (boolean, integer, etc). >>> for row in db(db.log.event.len() > 13).select(): ... print row.event ... unauthorized login Expressions can be combined to form more complex expressions. For example here we are computing the sum of the length of the event strings in the logs plus one: >>> exp = (db.log.event.len() + 1).sum() >>> db().select(exp).first()[exp] 43 function, COALESCE, for this. web2py has an equivalent coalesce method: >>> db.define_table('sysuser', Field('username'), Field('fullname')) <Table sysuser (id, username, fullname)> >>> db.sysuser.insert( >>> print exp SUM(COALESCE("sysuser"."points",'0'))(name='Susan') UPDATE "person" SET "name"='Susan'": with open('test.csv', 'wb') as dumpfile: dumpfile.write(str(db(db.person).select())) Or in Python 3: >>> open('test.csv', 'w', encoding='utf-8', newline='').write(str(db(db.person.id).select())) This is equivalent to rows = db(db.person).select() with open('test.csv', 'wb') as dumpfile: rows.export_to_csv_file(dumpfile) You can read the CSV file back with: with open('test.csv', 'rb') as dumpfile: db.person.import_from_csv_file(dumpfile) Or in Python 3: >>> rows = db(db.person.id).select() >>> rows.export_to_csv_file(open('test.csv', 'w', encoding='utf-8', newline='')) You can read the CSV file back with: >>> db.person.import_from_csv_file(open('test.csv', 'r', encoding='utf-8', newline='')): with open('somefile.csv', 'wb') as dumpfile: db.export_to_csv_file(dumpfile) To import: with open('somefile.csv', 'rb') as dumpfile: db.import_from_csv_file(dumpfile) Or in Python 3: To export: >>> db.export_to_csv_file(open('test.csv', 'w', encoding='utf-8', newline='')) To import: >>> db.import_from_csv_file(open('test.csv', 'r', encoding='utf-8', new by \r\n\r\n (that is two empty lines). The file ends with the line END The file does not include uploaded files if these are not stored in the database. The upload files stored on filesystem must be dumped separately, a zip of the "uploads" folder may suffice in most cases. once again the following model: db.define_table('person', Field('name')) db.define_table('thing', Field('name'), Field('owner_id', 'reference person')) # usage example if db(db.person).isempty(): nid = db.person.insert(name='Massimo') db.thing.insert(name='Chair', owner_id=nid) Each record is identified by an identifier and referenced by that id. If you have two copies of the database used by distinct web2py installations, the id is unique only within each database and not across the databases. This is a problem when merging records from different databases. In order to make records uniquely identifiable across databases, they must: - have a unique id (UUID), - have a last modification time to track the most recent among multiple copies, - reference the UUID instead of the id. 
This can be achieved changing the above model into: import uuid db.define_table('person', Field('uuid', length=64), Field('modified_on', 'datetime', default=request.now, update=request.now), Field('name')) db.define_table('thing', Field('uuid', length=64), Field('modified_on', 'datetime', default=request.now, update=request.now), Field('name'), Field('owner_id', length=64)) db.person.uuid.default = db.thing.uuid.default = lambda:str(uuid.uuid4()) db.thing.owner_id.requires = IS_IN_DB(db, 'person.uuid', '%(name)s') # usage example if db(db.person).isempty(): nid = str(uuid.uuid4()) db.person.insert(uuid=nid, name='Massimo') db.thing.insert(name='Chair', owner_id=nid) tablename in db.tables: table = db[tablename] # for every uuid, delete all but the latest items = db(table).select(table.id, table.uuid, orderby=~table.modified_on, groupby=table.uuid) for item in items: db((table.uuid == item.uuid) & (table.id != item.id)).delete() return dict(form=form) Optionally you should create an index manually to make the search by uuid faster.) Rows objects also have an xml method (like helpers) that serializes it to XML/HTML: >>> rows = db(db.person.id == db.thing.owner_id).select() >>> print rows.xml() <table> <thead> <tr><th>person.id</th><th>person.name</th><th>thing.id</th><th>thing.name</th><th>thing.owner_id</th></tr> </thead> <tbody> <tr class="w2p_odd odd"><td>1</td><td>Alex</td><td>1</td><td>Boat</td><td>1</td></tr> <tr class="w2p_even even"><td>1</td><td>Alex</td><td>2</td><td>Chair</td><td>1</td></tr> <tr class="w2p_odd odd"><td>2</td><td>Bob</td><td>3</td><td>Shoes</td><td>2</td></tr> </tbody> </table> TAGhelper (described in Chapter 5) and the Python syntax *<iterable>allowed in function calls: >>> rows = db(db.person).select() >>> print TAG.result(*[TAG.row(*[TAG.field(r[f], _name=f) for f in db.person.fields]) for r in rows]) <result> <row><field name="id">1</field><field name="name">Alex</field></row> <row><field name="id">2</field><field name="name">Bob</field></row> <row><field name="id">3</field><field name="name">Carl</field></row> </result> Data representation The Rows.export_to_csv_file method rows = db(query).select() with open('/tmp/test.txt', 'wb') as oufile: rows.export_to_csv_file(oufile, delimiter='|', quotechar='"', quoting=csv.QUOTE_NONNUMERIC) Which would render something similar to "hello"|35|"this is the text description"|"2013 memory. If the next call to this controller occurs in less than 60 seconds since the last database IO, it simply fetches the previous data from memory.: rows = db(query).select(cache=(cache.ram, 3600), cacheable=True) Self-Reference and aliases db.define_table('person', Field('name'), Field('father_id', 'reference person'), Field('mother_id', 'reference person')) Notice that the alternative notation of using a table object as field type will fail in this case, because it uses a table before it is defined: db.define_table('person', Field('name'), Field('father_id', db.person), # wrong! Field('mother_id', db['person'])) # wrong! In general db.tablename and 'reference tablename' are equivalent field types, but the latter is the only one allowed for self-references. When a table has a self-reference and you have to do join, for example to select a person and its father, you need an alias for the table. In SQL an alias is a temporary alternate name you can use to reference a table/column into a query (or other SQL statement). With web2py you can make an alias for a table using the with_alias method. 
This works also for expressions, which means also for fields since Field is derived from Expression. Here is an example: >>> fid, mid = db.person.bulk_insert([dict( >>> str(Father) 'person AS father' >>>')) <Table person (id, name, father, mother)> >>> fid, mid = db.person.bulk_insert([dict(name='Massimo'), dict(name='Claudia')]) >>> db.person.insert(name='Marco', father=fid, mother=mid) 3 >>> father = db.person.with_alias('father') >>> mother = db.person.with_alias('mother') >>>'), Field('gender')) <Table person (id, name, gender)> >>> db.define_table('doctor', db.person, Field('specialization')) <Table doctor (id, name, gender, specialization)> It is also possible to define a dummy table that is not stored in a database in order to reuse it in multiple other places. For example: signature = db.Table(db, 'signature', Field('is_active', 'boolean', default=True), Field('created_on', 'datetime', default=request.now), Field('created_by', db.auth_user, default=auth.user_id), Field('modified_on', 'datetime', update=request.now), Field('modified_by', db.auth_user, update=auth.user_id)) db.define_table('payment', Field('amount', 'double'), signature): >>> import json >>> db.define_table('anyobj', ... Field('name'), ... Field('data', 'text')) <Table anyobj (id, name, data)> >>> db.anyobj.data.filter_in = lambda obj: json.dumps(obj) >>> db.anyobj.data.filter_out = lambda txt: json.loads(txt) >>> myobj = ['hello', 'world', 1, {2: 3}] >>> aid = db.anyobj.insert(name='myobjname', data=myobj) >>> row = db.anyobj[aid] >>> row.data ['hello', 'world', 1, {'2': 3}] Another way to accomplish the same is by using a Field of type SQLCustomType, as discussed in next Custom Field types section. callbacks on record insert, delete and update a callback function by appending it to the corresponding list. The caveat is that depending on the functionality, the callback has different signature. This is best explained via some examples. >>> db.define_table('person', Field('name')) <Table person (id, name)> >>> def pprint(callback, *args): ... print "%s%s" % (callback, args) ... >>> db.person._before_insert.append(lambda f: pprint('before_insert', f)) >>> db.person._after_insert.append(lambda f, i: pprint('after_insert', f, i)) >>> db.person.insert(name='John') before_insert(<OpRow {'name': 'John'}>,) after_insert(<OpRow {'name': 'John'}>, 1L) 1L >>> db.person._before_update.append(lambda s, f: pprint('before_update', s, f)) >>> db.person._after_update.append(lambda s, f: pprint('after_update', s, f)) >>> db(db.person.id == 1).update(name='Tim') before_update(<Set ("person"."id" = 1)>, <OpRow {'name': 'Tim'}>) after_update(<Set ("person"."id" = 1)>, <OpRow {'name': 'Tim'}>) 1 >>> db.person._before_delete.append(lambda s: pprint('before_delete', s)) >>> db.person._after_delete.append(lambda s: pprint('after_delete', s)) >>> db(db.person.id == 1).delete() before_delete(<Set ("person"."id" = 1)>,) after_delete(<Set ("person"."id" = 1)>,) 1 As you can see: fgets passed the OpRowobject with data for insert or update. igets passed the id of the newly inserted record. sgets passed the Setobject used for update or delete. OpRow is an helper object specialized in storing (field, value) pairs, you can think of it as a normal dictionary that you can use even with the syntax of attribute notation (that is f.name and f['name'] are equivalent). firing other callbacks, which could cause an infinite loop. For this purpose there the Set objects have an update_naive method that works like update but ignores before and after callbacks. 
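As a concrete (and purely illustrative) use of these callback lists, the sketch below keeps a small audit trail: a _before_update callback copies the rows about to change into a hypothetical person_archive table. The table name and its fields are assumptions made only for this example; the callback signatures follow the ones shown above.
db.define_table('person_archive',
    Field('person_id', 'integer'),
    Field('old_name'))
def archive_person(s, f):
    # s is the Set being updated, f is the OpRow of new values
    for row in s.select(db.person.id, db.person.name):
        db.person_archive.insert(person_id=row.id, old_name=row.name)
db.person._before_update.append(archive_person)
db(db.person.id == 1).update(name='Timothy')   # archive_person runs first
If a callback ever needs to update the same table it is attached to, call update_naive there instead of update, so the callbacks are not fired again recursively.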
Database cascades Database schema can define relationships which trigger deletions of related records, known as cascading. The DAL is not informed when a record is deleted due to a cascade. So no *_delete callaback will ever be called as conseguence of a cascade-deletion. Chapter 9. It can also be done for each individual table as discussed below. Consider the following table: db.define_table('stored_item', Field('name'), Field('quantity', 'integer'), Field('is_active', 'boolean', writable=False, readable=False, default=True)) common_filter on this table that hides all records in table stored_item where the is_active field is set to False. The is_active parameter in the _enable_record_versioning method allows to specify the name of the field used by the common_filter to determine if the field was deleted or not. common_filters will be discussed in next Common filters section.. auth.define_tables() but before defining any other table, insert: db._common_fields.append(auth.signature) One field is special: request_tenant, you can set a different name in db._request_tenant. This field does not exist but you can create it and add it to any of your tables (or all of them): db._common_fields.append(Field('request_tenant', default=request.env.http_host, writable=False)) For every table with such a field, all records for all queries are always automatically filtered by: db.table.request_tenant == db.table.request_tenant.default and for every record inserted, this field is set to the default value. In the example above we have chosen: default = request.env.http_host this means we have chosen to ask our app to filter all tables in all queries with: db.table.request_tenant == request.env.http_host This simple trick allow us to turn any application into a multi-tenant application. Even though we run one instance of the application and we use one single database, when the application is accessed under two or more domains the visitors will see different data depending on the domain (in the example the domain name is retrieved from request.env.http_host). You can turn off multi tenancy filters using ignore_common_filters=True at Set creation time: db(query, ignore_common_filters=True) modified at runtime: db.blog_post._common_filter = lambda query: ... It serves both as a way to avoid repeating the "db.blog_post.is_public==True" phrase in each blog post search, and also as a security enhancement, that prevents you from forgetting to disallow viewing of non-public posts. In case you actually do want items left out by the common filter (for example, allowing the admin to see non-public posts), you can either remove the filter: db.blog_post._common_filter = None or ignore it: db(query, ignore_common_filters=True) Note that common_filters are ignored by the appadmin interface. Custom Field types Aside for using filter_in and filter_out, it is possible to define new/custom field types. For example, suppose that you want to define a custom type to store an IP address: >>> def ip2int(sv): ... "Convert an IPV4 to an integer." ... sp = sv.split('.'); assert len(sp) == 4 # IPV4 only ... iip = 0 ... for i in map(int, sp): iip = (iip<<8) + i ... return iip ... >>> def int2ip(iv): ... "Convert an integer to an IPV4." ... assert iv > 0 ... iv = (iv,); ov = [] ... for i in range(3): ... iv = divmod(iv[0], 256) ... ov.insert(0, iv[1]) ... ov.insert(0, iv[0]) ... return '.'.join(map(str, ov)) ... >>> from gluon.dal import SQLCustomType >>> ipv4 = SQLCustomType(type='string', native='integer', ... 
encoder=lambda x : str(ip2int(x)), decoder=int2ip) >>> db.define_table('website', ... Field('name'), ... Field('ipaddr', type=ipv4)) <Table website (id, name, ipaddr)> >>> db.website.insert(name='wikipedia', ipaddr='91.198.174.192') 1 >>> db.website.insert(name='google', ipaddr='172.217.11.174') 2 >>> db.website.insert(name='youtube', ipaddr='74.125.65.91') 3 >>> db.website.insert(name='github', ipaddr='207.97.227.239') 4 >>> rows = db(db.website.ipaddr > '100.0.0.0').select(orderby=~db.website.ipaddr) >>> for row in rows: ... print row.name, row.ipaddr ... github 207.97.227.239 google 172.217.11.174 reverse db = DAL('sqlite://storage.sqlite', folder='path/to/app/databases') i.e. import the DAL, connect and specify the folder which contains the .table files (the app/databases folder). To access the data and its attributes we still have to define all the tables we are going to access with db.define_table. db = DAL('sqlite://storage.sqlite', folder='path/to/app/databases', auto_import=True): >>>)) 1 >>> sp.insert(loc=geoLine((100, 100), (20, 180), (180, 180))) 2 >>> sp.insert(loc=geoPolygon((0, 0), (150, 0), (150, 150), (0, 150), (0, 0))) 3 Notice that rows = db(sp).select() Always returns the geometry data serialized as text. You can also do the same more explicitly using st_astext(): >>> print db).select(sp.id, dist) spatial.id,dist 1,2.0 2,140.714249456 3,1.0' \ -d ../gluon defines MSSQL3Adapter extends MSSQLAdapter MSSQL) IMAPAdapter extends NoSQLAdapter (experimental) MongoDBAdapter extends NoSQLAdapter (experimental) VerticaAdapter extends MSSQLAdapter (experimental) SybaseAdapter extends MSSQLAdapter (experimental) which override the behavior of the BaseAdapter. Each adapter has more or less this structure: class MySQLAdapter(BaseAdapter): # specify a driver, 'spatialite': SpatiaLiteAdapter, 'sqlite:memory': SQLiteAdapter, 'spatialite:memory': SpatiaLiteAdapter, 'mysql': MySQLAdapter, 'postgres': PostgreSQLAdapter, 'postgres:psycopg2': PostgreSQLAdapter, 'postgres2:psycopg2': NewPostgreSQLAdapter, 'oracle': OracleAdapter, 'mssql': MSSQLAdapter, 'mssql2': MSSQL2Adapter, 'mssql3': MSSQL3Adapter, 'mssql4' : MSSQL4Adapter, 'vertica': VerticaAdapter, 'sybase': SybaseAdapter, 'db2': DB2Adapter, 'teradata': TeradataAdapter, 'informix': InformixAdapter, 'informix-se': InformixSE:datastore+ndb': < 2012 does not support the SQL OFFSET keyword. Therefore the database cannot do pagination. When doing a limitby=(a, b) web2py will fetch the first a + b rows and discard the first a. This may result in a considerable overhead when compared with other database engines. If you're using MSSQL >= 2005, the recommended prefix to use is mssql3:// which provides a method to avoid the issue of fetching the entire non-paginated resultset. If you're on MSSQL >= 2012, use mssql4:// that uses the OFFSET ... ROWS ... FETCH NEXT ... ROWS ONLY construct to support natively pagination without performance hits like other backends. The mssql:// uri also enforces (for historical reasons) the use of text columns, that are superseeded in more recent versions (from 2005 onwards) by varchar(max). mssql3:// and mssql4:// should be used if you don't want to face some limitations of the - officially deprecated - text columns. db._adapter.types: if ' ON DELETE %(on_delete_action)s' in db._adapter.types[key]:) Oracle. Google NoSQL (Datastore) more efficient on Google NoSQL than on SQL databases.
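To round the section off, here is a hedged sketch of how the connection-string and pagination notes above fit together. Everything in it — the URI, credentials, and the website table — is a placeholder; the only facts it relies on from the text are that limitby=(min, max) selects rows from offset min up to (but not including) max, and that the mssql4:// prefix uses OFFSET ... FETCH NEXT for pagination on SQL Server 2012 or newer.
# connect to an existing SQL Server 2012+ database without touching its schema
db = DAL('mssql4://username:password@host/dbname', migrate_enabled=False)
db.define_table('website', Field('name'), Field('ipaddr'), migrate=False)
# page 3, 20 rows per page
page, per_page = 3, 20
rows = db(db.website).select(orderby=db.website.name,
                             limitby=(page * per_page, (page + 1) * per_page))
for row in rows:
    print(row.name, row.ipaddr)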
http://web2py.com/book/default/chapter/06
CC-MAIN-2020-24
refinedweb
8,046
50.43
How to check if a key exists in an S3 bucket using boto3?
import boto3
import botocore

s3 = boto3.resource('s3')
try:
    s3.Object('my-bucket', 'dootdoot.jpg').load()
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == "404":
        # The object does not exist.
        ...
    else:
        # Something else has gone wrong.
        raise
else:
    # The object does exist.
    pass
load() does a HEAD request for a single key, which is fast, even if the object in question is large or you have many objects in your bucket. Of course, you might be checking whether the object exists because you are planning on using it. If that is the case, you can just forget about the load() and do a get() or download_file() directly, then handle the error case.
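A minimal sketch of that second approach, reusing the same hypothetical bucket and key names: skip the existence check, attempt the download directly, and treat a 404 as "not there". The local target path is an arbitrary choice for the example.
import boto3
import botocore

s3 = boto3.resource('s3')
try:
    s3.Bucket('my-bucket').download_file('dootdoot.jpg', '/tmp/dootdoot.jpg')
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == '404':
        print('The object does not exist.')
    else:
        raise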
http://www.pro-tekconsulting.com/blog/how-to-check-if-a-key-exists-in-s3-bucket-using-boto3-2/
CC-MAIN-2019-35
refinedweb
126
69.48
Go Categories Science Math and Arithmetic Numerical Analysis and Simulation Answered Numerical Analysis and Simulation Parent Category: Math and Arithmetic The study of algorithms for problems related to continuous mathematics Subcategories Numerical Series Expansion 1 2 3 > Change the decimal number 234.365 to binary number system? 234=(1)128+(1)64+(1)32+(0)16+(1)8+(0)4+(1)2+(0)1: 11101010 2 0.365 = (0).5 + (1).25 + (0).125 + (1).0625 + ... =0.0101... 2 1.11010100101110101110000101..._2Ã2 7 (see link) continuous, or maybe... How do you write four-tenths in numerals? Expressed as a decimal fraction in its simplest form, 4/10 is equal to 0.4.\n Why is this distribution referred to as a geometric distribution? The geometric distribution is: Pr(X=k) = (1-p) k-1 p for k = 1, 2 , 3 ... A geometric series is a+ ar+ ar 2 , ... or ar+ ar 2 , ... Now the sum of all probability values of k = Pr(X=1) + Pr(X = 2) + Pr(X = 3) ... = p + p 2 +p 3 ... is a geometric series with a = 1 and the value 1... What is 6.384 rounded to the nearest 2 decimal places? 6.38 How do you find the roots of a polynomiyal? In numerical analysis finding the roots of an equation requires taking an equation set to 0 and using iteration techniques to get a value for x that solves the equation. The best method to find roots of polynomials is the Newton-Raphson method, please look at the related question for how it works. X plus y plus x equals? x + y + x = 2x + y What is the most dominant number system? decimal What is law of large numbers? Answer . when a probability experiment is repeated a large number of times, the relative frequency probability of an outcome will approach its theoretical probability.... Find the first 5 derivatives of cos x1 x Use Maclaurin series to 6 terms? Cos (aX) = 1 + bX, give value of X in terms of a and b. . Practical Problem at site: An arc shape insert- plate of which R was 1250 and D was 625mm and thus S = 2618 (chord length (L)= 2165), has been damaged and flattered. And site staff gives feedback that it is now D = 525 instead of 625 and... How can you determine the base of any number system? The answer depends on what information you have. Given only one number, all that can be said is that the base is larger than the largest digit appearing in the number. Equations will not help if there are no "carries". For example, For example, 10 + 12 = 22 is true in any base greater than or... How do you code a program to find kaprekar number? import java.io.*; class kaprekar { public static void main(String args[])throws IOException { BufferedReader br=new BufferedReader(new InputStreamReader(System.in)); System.out.println("Enter any number"); int n=Integer.parseInt(br.readLine()); int t=n,count=0; while(n!=0) { n=n... 34.872 to 1 significant figure? 34.872 rounded and chopped to 1 sig fig is 30. The previous answer 34.8 is wrong, 34.8 is 34.872 chopped to 3 sig figs. Any digit in a number that is not zero is classed as a significant figure e.g. your number above has 5 sig figs but if it was 30.072 it would only have 3 sig figs: 3, 7 and 2.... An example of iterative methods using jacobi and Gauss seidal method? This is gonna turn into a long answer, please bear with me. If you already know the algorithm's just skip to the examples. Iterative techniques for solving a linear system A x=b, where A is a square matrix with non-zero diagonal entries, start of by first rearranging the system into a form x ... Madras university old question papers? 
it is really impossible to get the model question papers but indiastudycentre has certain question papers for the B.Sc degrees, you can check there A number is divisible by another? A number is divisible by another when the remainder of the division is zero. What is the Taylor expansion? A Taylor expansion is a way of representing a function in terms ofa sum of its derivatives. Please see the link. You select 12 people from a group of 30 people that includes 12 men and 18 women what is the probability that the group will be 12 women? The probability that 12 randomly selected people from a group of 12 men and 18 women will all be women is (18 in 30) times (17 in 29) times (16 in 28) times (15 in 27) times (14 in 26) times (13 in 25) times (12 in 24) times (11 in 23) times (10 in 22) times (9 in 21) times (8 in 20) times (7 in 19)... When I was married 10 yrs ago my wife is the 6th member of the family Today my dad died and a baby born to me The average age of family during my marriage is same as today What s the age when dad died? 60 Father age is 50. today What is the square root of 54? Approximately 7.34847, rounded to 5dp. The square root of any number can be found through the Newton-Raphson and Secant fixed point iteration methods. See links below for more info. 7.3484692 Runge-Kutta 4 technique with shooting method? Since the questioner didn't specify which shooting method was being used, I'll explain the linear shooting method. First off, the shooting method is used to solve boundary-value problems, or BVPs, where instead of being given 2 initial conditions to solve a problem, we only have the value of the... What isTruncation error in numerical flow scheme? Truncation error is the error introduced when an series is shortened, i.e. "truncated", before it is complete. For instance, 1/3 is 0.333333333...etc., but we place limits on how many decimal digits to use, so that introduces an error. Another example is a large number, such as 2 40 -1. That... How can expansion be useful? When things expand, it can block certain things out like a machine How... What is the value of h in numerical analysis when doing differentiation? h, being the step size of an algorithm in numerical analysis, is always (b-a)/N where x is in the interval [a, b] and N is the number of iterations in the algorithm. What is the multiplication property shown here called 12 times 2 equals 2times 12? Commutative. What is the definition of numerical analysis? It is the study of algorithms that use numerical values for the problems of continuous mathematics. What 2 nine digit number over each other is close to pie? 314159265 / 100000000 = 3.14159265 How are harmonics related to the fundamental frequency? \n \n Normal \n 0 \n 21 \n \n \n false \n false \n false \n \n \n \n \n \n \n \n MicrosoftInternetExplorer4 \n \n \n\n The fundamental =\n1st harmonic is not an overtone! \n\n \n\n Fundamental\nfrequency = 1st harmonic. \n\n 2nd harmonic = 1st\novertone. \n\n 3rd harmonic = 2nd... How many ping pong balls fit in a 747? Assuming a diameter of 40mm and assuming hexagonal closest packed, 625 balls would fit in a cubic foot. Boeing's website lists passenger volume for 747-400 as 31,285 plus cargo. 5536+835 = 37656 cubic feet An estimate would be the number of balls in a cubic foot times volume = 23.5 million... What is 1 and 2 half times 5 equal? 2.5 or 5 over 2 The expansion of a binomial that involves a coefficient found by combinations? Is the binomial expansion. Why is lower percentage error better? 
I would have thought this blindingly obvious but no matter, a lower percentage error is better because it means your approximation to a solution is closer to the real answer than an approximation with a higher error. What is the difference between poisson distribution and poisson process? A poisson process is a non-deterministic process where events occur continuously and independently of each other. An example of a poisson process is the radioactive decay of radionuclides. A poisson distribution is a discrete probability distribution that represents the probability of events ... What made the number system We use? the Egyptians ----------------------------------------------------------------------------------- To be a little more detailed, the base-10 number system that we use is based simply off of our fingers. We have 10 fingers, and so when civilization was developing the concept of counting, it was... The concept of the Everett's formula for interpolation? Unfortunately I can't find any freely available information about the concept/theory/derivation of the Everett formula, I can find the equation fine but none of the theory related to it, and it's not covered in any of my text books that go well beyond my university course where I have also taken all... Write a program in C to implement Gauss Elimination Method? #include int main() { double matrix[10][10],a,b, temp[10]; int i, j, k, n; printf("Enter the no of variables: "); scanf("%d", &n); printf("Enter the agumented matrix:\n"); for(i = 0; i < n ; i++){ for(j = 0; j < (n+1); j++){ scanf("%lf", &matrix[i][j]); } } for(i = 0; i < n; i++){ for(j = 0; j < n... 'how many digits number system contains'? There are 10 digits in our number system. The symbols 0,1,2,3,4,5,6,7,8,and 9 are the digits used to create numbers. What does 1.5 centimeters look like? about the distance between the two lines below |..........| C program to solve equation in runge kutta method? using c language to implement Runge-Kutta method in numerical analysis What is the rate of convergence for the bisection method? The rate of convergance for the bisection method is the same as it is for every other iteration method, please see the related question for more info. The actual specific 'rate' depends entirely on what your iteration equation is and will vary from problem to problem. As for the order of... number does the letter A represent in hexadecimal system? 10 10 or 1010 2 . What does the number 556789329581 mean it has been in my head for six years? The fact that this number has been on your mind for the past six years is very likely to be a symptom of obsessive compulsive disorder. Please seek professional counseling as soon as possible, for your sake and the sake of those around you (which could be me). What is the sum of the infinite geometric series? The sum of the series a + ar + ar 2 + ... is a/(1 - r) for |r| < 1 an adherent point an accumulation point? No, not all adherent points are accumulation points. But all accumulation points are adherent points. If Jack is Izbj in a code then what is Mary in the same code? If "Jack" is "Izbj" in a code, then a likely candidate for that code is a trivial "rotate left one position" code. "Mary", in the same code, would be "Lzqx". A=Z, B=A, C=B, etc. Is there a number system other than real numbers? There are several that are especially important: . integers: ..., -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1,2, 3, 4, 5, ... . 
rational numbers: ie, numbers that can be written as quotientsof integers, such as 1/2, 7/8, etc. . irrational numbers: ie, numbers that cannot be written asquotients of... Find the error its impossible 1 2 3 4 56 7 8 9 10 11 12 13 14 15 16 17 18 19 20? the missing space between the 5 and 6. what is so impossible about that? are you just trying to be ironic, funny? How does the bisection method work when solving nonlinear equations? it works exactly the same as it does with linear equations, you don't need to do any differentiation or anything fancy with this method, just have to plug in values of x, so it shouldn't make a difference if the equation is linear or nonlinear. What is positional and nonpositional number system? The number system now in commonest use worldwide is positional.Consider a number such as 924.37. The position of eachdigit in the number indicates how significant it is. The digit 9represents 100s, the 2 10s, the 4 1s, and so on. The Roman number, or 'numeral' system, is non-positional. The... What are Charts time lines web diagrams and Venn diagrams examples of? graphical representations Relationship between fundamental frequency and harmonic in harmonic series? A harmonic frequency is a multiple of the fundamental. If the frequency is 60Hz, then the 2nd harmonic is (2 * 60) 120 Hz, the third is (3 * 60) 180 Hz, etc. Derive the moment generating function of the poisson distribution? The probability mass function (pmf, you should know this) of the Poisson distribution is . p(x)=((e -λ )*λ x )/(x!), where x= 0, 1, ........ Then you take the expected value of exp(tx), you should always keep in mind to find the moment generating function (mgf) you must always do (e tx... What are the Uses of Beta and gamma functions? Application in String theory in Quantum Mechanics How do you write 45000 in numeric numbers? 45,000 What are the 4 fundamental laws in mathematics? Answer . I don't know why there should be 4 laws (=axioms) specifically. In mathematics you can choose whatever system of axioms and laws and work your way with those. Even "logic" (propositional calculus) can be redefined in meaningful ways. the most commonly used system is Zermolo-Fraenkel... Why do you need pivoting in maths? You are given a system of n or more simultaneous linear equationsinvolving n unknowns. Pick one of the unknowns, called the pivotvariable. Find an equation in which it appears, called the pivotequation. What is the difference between the Monte Carlo method and Latin Hypercube Sampling? actually montecarlo is based on random selection (of cours randamness is expected to be random means to cover tjhe whole interval so the more the better )along the CDF(cumilative distribution function ) to extract the input that expected to keep the original distribution to some degree. in latin... What is the point of homeschooling? The Point of Homeschooling . Now that is a good question!. The "point of homeschooling" stems from your worldview. The concept of world view is central to the current firestorms surrounding education because it forms the basis for one's idea of a good life.. It is impossible to talk about... What is a example of convergence? Convergence of telecommunications What is convergence of probability? "Convergence in probability" is a technical term in relation to a series of random variables. Not clear whether this was your question though, I suggest providing more context. Relationship between Exponential and Poisson Distributions? 
Poisson distribution shows the probability of a given number ofevents occurring in a fixed interval of time. Example; if averageof 5 cars are passing through in 1 minute. probability of 4 carspassing can be calculated by using Poisson distribution. Exponential distribution shows the probability of... What is the culture difference between srilanka n UK? In Sri Lanka, there are less facilities than in England, but good behaviour and better rules ,where the children grow up with good manners.. Part of mathematics that studies number systems and number properties? If you mean properties of numbers such as 'six is triangular'and 'seven is prime' then I would offer number theory as ananswer. . If you mean that a certain subset of integers derived from theintegers by taking remainders modulo a prime form a field then Iwould offer abstract algebra. . If you mean... Who invented the number system based on the number 60? Quite probably the ancient Babylonians. How many digits are used in a binary number system and what are they? There are two digits in the binary number system. 0 and 1 In SciLab software how do you change the range of the x-axis in a plot without changing the domain of x in your functions of y? First of all you need get the access to the figure properties . this is done by gca () command. and then you need to change the data_bounds . a=gca() ;//get the current axesa.data_bounds=[0,-1;10,1.5]; C programming of Runge-Kutta method? PROGRAM :- /* Runge Kutta for a set of first order differential equations */ #include #include #define N 2 /* number of first order equations */ #define dist 0.1 /* stepsize in t*/ #define MAX 30.0 /* max for t */ FILE *output; /* internal filename */ void runge4(double x,... What is the difference between division and integer division? In integer division, you expect the result to be an integer. Anything left over will be quoted as a remainder. The more commonly used division (not integer division) will continue calculating decimals, up to the desired accuracy. Moment generating and the cumulant generating function of poisson distribution? The moment generating function is M(t) = Expected value of e^(xt) = SUM[e^(xt)f(x)] and for the Poisson distribution with mean a inf = SUM[e^(xt).a^x.e^(-a)/x!] x=0 inf = e^(-a).SUM[(ae^t)^x/x!] x=0 = e^(-a).e^(ae^t) = e^[a(e^t -1)] Nine times six equals? 54 Theory of law of large numbers? Yes, there is one. And your question is ... ? If average height for women is normally distributed with a mean of 65 inches and a standard deviation of 2.5 inches then approximately 95 percent of all women should be between what and what inches? A normal distribution with a mean of 65 and a standard deviation of 2.5 would have 95% of the population being between 60 and 70, i.e. +/- two standard deviations. Is the binomial distribution is a continuous distribution? No it is a "discrete" distribution because the outcomes can only be integers. Why do you need maths? mathematics allows the human brain to think logically and strategically. It means that we can solve problems we face, even if they are completely unrelated to maths itself. Mathematics is used in almost everything done in the average persons day. from using a computer to cooking to cutting a piece... Binary search iterative method over recurssive method? Iteration is more efficient. How many people play baseball? In the USA more than 40 million people play some form of baseball.This takes into account all ages and gender. 
Mens slow pitchsoftball accounts for up to 18 milliion players participating insome league each year. These numbers make baseball/softball the #1played in US. compared to basketball at... Real root of x3-2x plus 1 in newton raphson method? By x3 I assume that you mean x 3 . In which case f(x)=x 3 -2x+1, and f'(x)=3x 2 -2. Therefore our iteration formula is: x n+1 =x n - (x n 3 -2x n +1)/(3x n 2 -2) Starting with x 0 =0 we get: x 1 =0.5 x 2 =0.6 x 3 =0.617391304 x 4 =0.618033095 x 5 =0.618033988 x... What number system does the world use? Hindu-Arabic numeral system Can Poisson probability distributions be simulated? Yes. If X has Poisson distribution does aX plus b have Poisson Distribution? Yes. Derive recursion formula for sin by using Taylor's Series? the Taylor series of sinx What statement has a negative tone? he commandand me to help him with the lab report What is The expansion of a binomial that involves a coefficient found by combinations? Binomial Theorum Why are girls smart in math? because they just are What is the last number in the numerical system? There is no last or final number. The real number system isdesigned in such a way that the numbers go on indefinitely. What are the subset of the real number system? Irrational Numbers, Rational Numbers, Integers, Whole numbers, Natural numbers What are harmonics? Answer . Harmonics are multiples (thirds, fifths, etc) or divisions of frequencies. In radio, harmonics can be used carry additional signals on a single base frequency. It is the harmonics of an audio frequency that make a musical instrument unique. By damping a string at a half or third/fifth of... What is period in place value? 769 853 Dividing polynomials when the divisor is a polynomial of the second degree by method of synthetic division.? I can solve this question . But i think it is better to hold on . I want to register my finding with my name. Which math class is higher Investigation Math or MATH 7? math 7 How do you write in words 6.04? 6.04 is six and four hundredths How do you write 0.752 billion in numbers? 7,52000000 08 = 752 000 000 How is poisson distribution related to binomial distribution? If X and Y are i.i.d Poisson variables with lambda1 and lambda2 then, P (X = x | X + Y = n) ~ Bin(n, p) where p = lambda1 / lambda1 + lambda2 What is a sequence which is not convergent defined as? It could be divergent eg 1+1+1+1+... Or, it could be oscillating eg 1-1+1-1+ ... So there is no definition for a sequence that is not convergent except non-convergent. Define error relative error and absolute error give examples of each? Error is the term for the amount of difference between a value and it's approximation, and is represented by either an upper or lower case epsilon (E or ε) E abs , absolute error, is |x-x*| where x* is the approximate of x, and gives a value that shows how far away the approximate is as a... What is a multiplicative especially as it relates to a multiplicative inverse? Multiplicative means pertaing or related to the mathematicalopration known as multiplication. 1 2 3 >
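As a quick check of the Newton-Raphson answer given above for x^3 - 2x + 1 (iterating x_{n+1} = x_n - f(x_n)/f'(x_n) from x_0 = 0), here is a short illustrative Python sketch; the tolerance and iteration cap are arbitrary additions, not part of the original answer.
def newton_raphson(f, fprime, x0, tol=1e-9, max_iter=50):
    # repeat x = x - f(x)/f'(x) until the correction is smaller than tol
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**3 - 2*x + 1
fprime = lambda x: 3*x**2 - 2
print(newton_raphson(f, fprime, 0.0))   # 0.5, 0.6, ... converging to about 0.6180339887, matching the sequence above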
http://www.answers.com/Q/FAQ/6261
CC-MAIN-2018-34
refinedweb
3,683
67.35
A they don’t know how the tax code works. The major failing seems to be an incomprehension regarding marginal tax rates, but people also seem to fall down on the matter of taxable income vs. gross income (i.e. how deductions can work for you!), how to apply tax credits, and other various and fairly basic aspects of the tax code here in the US. If you don’t know that stuff — if you basically wander through your life thinking the government taxes all of your income based on the highest possible percentage — then I suppose it’s no wonder you freak out. But it also kind of makes you the financial equivalent of the people who think that Darwin said we are all descended from monkeys, or that the Bible says “God helps those who help themselves.” In short, it means you’re a bit ignorant. You should stop being that. It’s easily correctable. In any event, at some point in time, real live grown-ups should understand the concept of marginal rates. It’s not that difficult to grasp. There!”).. Getting back to the real word for a bit, I’ll be the first to admit that while understanding the basics of the US tax code is useful for not irrationally freaking out when there is talk of raising the marginal rates of the top few percent of income earners in the United States, in point of fact, unless all one is doing is filling in a 1040 A or EZ form, on a practical level the US Tax Code quickly becomes too complicated for most people to deal with, especially when the only time they deal with it is between April 10 and April 15 every year. This is why probably the single most important thing you can do for yourself financially, the moment your tax profile outgrows the 1040 A or EZ, is to get yourself an accountant. Because it’s the accountant’s job to know the tax code — not just a half a week a year but all year long. In the now-long-gone blog entry of Professor Todd Henderson’s that started off this entire recent round of income-related nonsensery, the one thing in it that actually gave me pause — and which convinced me the man was something of a fiscal naif — was when he revealed that a) he didn’t have an accountant and b) that he was still using TurboTax for his taxes. And I was all, like, what? Dude, you can pay for a gardener but then cry that paying for an accountant is too dear? No wonder you’re all worked up. I very specifically don’t want to start another round of Henderson-whacking — the man’s been whacked enough — but I will say that after a certain level rather below Professor Henderson’s income and taxation situation, you should recognize that what you don’t know about the US tax code is probably making you pay more than you have to and/or making you miss something you shouldn’t. Which will come back to bite you in the ass in the form of an audit, followed by late payment penalties and fees. My own moment of clarity on this score came in 2001, when we moved to Ohio; we became landlords and I also started my own company. Both of these things, and other financial events, caused me to look at my tax profile and go, oh, man, I am so very over my head right now. Bear in mind that I said this when I had written a book on finance, and when I was currently writing a finance newsletter for AOL, and also working as a consultant for a number of financial services companies. I was not exactly innumerate. But then maybe that was the thing: I knew enough to know I didn’t know nearly enough. 
So we got ourselves an accountant, and she was (and is) very good at what she does, and her competence at her job means our tax situation is both well-managed and never a surprise. So. If you’re freaked out about taxes, please make sure you actually know what you’re talking about when it comes to taxes. If you are a high-income earner and/or have a complicated tax profile, invest in an accountant. Either or both should help to calm your tax frenzy a bit. And if they don’t, accept that the reason you’re in a frenzy is probably because you want to be, rather than because the situation genuinely warrants it. 236 thoughts on “Tax Frenzies and How to Hose Them Down” Ayn Rand’s ghost is gonna be pissed. I’m already on the ghost’s shit list for the Ayn Rand Christmas Special. Ha Ha. I have the urge to start a nerdly volunteer group, helping the underprivileged, called the Ayn Rand Shit List. Probably would have to officially be called something more tactful, but the vision would remain. Or just call it “Annoy Ayn Rand Day.” I’m consistently amused by the folks on message boards who keep saying things like “It’s nice to spend other people’s money. But we’re not going to put up with it for much longer.” and then throw some Rand line out there. My response is invariably. “Oh. Are you going to go Galt? Awesome! Could you do that, like, right now? Please?” Maybe once all the hedge fund managers, tv personalities, trust fundies, and lobbyists go to ground in their libertarian utopia (and start arguing about who is, and who isn’t above having to scrub the toilets and cook the food) the rest of us can get back to work. For some reason the “tax frenzies,” as you call them, have no problem with the fact that our society sends men with guns to the doorsteps of the people who don’t pay their mortgages or their child support. They want men with guns to enforce those sorts of contracts, while they profess abhorrence of the use of men with guns to enforce the contract they have with the political sovereignty to pay for the services it provides them. .” This sentence just made my day. Literally. I’m going to be giggling until this evening. Also, just so I have it out there before there’s a gripe about it, I don’t believe everyone who wants to argue tax rates are too high thinks “Taxes = Theft.” Indeed, I prefer it when they don’t. That thing about people who don’t think taxation is theft reminds me of something I saw on television in Iowa back in the early nineties, when farms were failing. A local TV station was interviewing people on the street about the situation. Two guys from out of state were asked if they were worried about farms failing. “No, not really.” “If farms fail, where will you get your food?” With a snort of laughter the guy said, “At the store!” I (heart) my CPA. > I know you all thought you were going to be one of those paying a nickel for your cigarettes in Galt Gulch. See also The marginal tax rates cause a lot of problems. Consider my friends, who live in San Jose. The guy makes $70,000 a year. His wife could make the same. They have a kid, and are. Would you work full time for $10,000 in cash? I wouldn’t. She won’t. And if the marginal tax rate on these “fat cats” increases, the decision becomes very easy to not work. Hmmm… “Anthony & the Cambodians (gardeners)” vs. The Accountant. Tough question. When Anthony and his merry men descend locust-like on my lawn and leaves it spotless, and I don’t have to spend the next few hours in misery (I have pollen allergy), it would be the gardener. 
On the other hand, when I fill out the form and send it with a large pack to receipts to my accountant every Feb., the accountant sounds best (I’m not allergic to anything in Feb.) Tough decision. The accountant is actually cheaper than the gardeners per annum. But I value not being in misery for dozens of hours each summer really highly. How much is a Darth Vader respirator and a lawn mower amortized over 3 years? I could do my own taxes if I really wanted to. How much harder than the tensor form of the Navier-Stokes equations can they be? Tough choice. Regards, Jack Tingle Who is not (by the Obama index) rich, but is well-off. I never noticed it before, but Scalzi, you’re very good at making straw man arguments. Bill, a couple making a combined $140k a year won’t be paying the higher tax rate. Also, they’re making more than enough money, if they’re paying on a home and have kids, etc., to justify hiring a CPA to find all those great little loopholes. No _way_ she’s only going to be bringing home $10k after taxes and daycare. But your numbers sure do SOUND scary. Congratulations, I think you’re ready to be a FOX News ‘commentator’. Ayn Rand. RIght theory, wrong universe. Consider this. Ralph Nader once put out that we should tax the activities we don’t like (like cigarette smoking) in order to discourage such activity. With the progressive tax system we are basically doing that: taxing and penalizing people for being industrious and working hard. No wonder we’re in a recession with a party that considers hard work a sin. Scorpius: I understand that you’ll feel better if you think they are strawman arguments. And considering the silliness you just barfed up in the comment just before this one, it’s clear you know all about that sort of argument. Bill “Child care at the place they’d want to send their kid to is $2,000/month” I’m glad we’ve agreed to throw reason to the wind hear and trot out the scariest bullshit we can think of on short notice. Do they have diamond encrusted baby bottles? Seriously, good work. Wow. “Objectivist Jerky.” Truly Mr. Scalzi, on the battlefield of wits, you are armed with the M-29 Davey Crockett Weapon System. Why don’t we just go to the Fair Tax system? That way we can get accountants doing something productive instead of a shell game. JR Kincaid: I’m sure we all have our favorite schemes to make taxes “fair” — and the “Fair Tax” in my opinion is one of the dumbest — but this isn’t the thread to discuss them. It’s not on point. What’s on point: discussing the tax system as it is and people’s knowledge and understanding therein. Bill: Leaving aside that personal expenses are not tax expenses, so factoring them into the discussion here is the equivalent of slipping cards into the deck, try making an argument that doesn’t posit paying for preschool $2,000 less a year than the average cost of annual tuition at a private college. “No wonder we’re in a recession with a party that considers hard work a sin.” I think that’s just about my favorite thing about the Rush Limbaugh ditto-head personality type. They all think they know what hard work is. Here’s a hint. Hardwork doesn’t net a you a billion dollars for bellowing into a microphone. Hard work doesn’t allow you to get up to 300+ lbs, because its Hard Work. Hard Work doesn’t involve desks, air conditioning, or Casual Friday. What you are talking about is the “Right” work. As in went to the Right schools, knows the Right people, has the Right degree to get the Right job. 
Largely, they have little or nothing to do with one another. DH and I started paying our tax accountant when DH started his own business about 12 years ago. It is more than worth it to NOT have to worry about an audit 5 years after the fact; we just went through that hoo-haw about 2 months ago. We forwarded the form letter about back taxes to our accountant, who sent the information our tax office had lost/overlooked to the tax office AGAIN. Mischief managed, and we are profoundly happy that we had someone on the payroll who could take care of it promptly and neatly. BTW, we are pleased to pay for clean water, firefighters, roads that aren’t dirt trails, and all other mod. cons. via our taxes. Plus a bit for people whose misfortunes aren’t their doing. I’ve been hiring out my taxes ever since I screwed up the childcare tax credit and had to pay a penalty. And that was before I was running my own freelance writing business. I have some freelance writer friends that do their own taxes and I just have to shake my head in disbelief every year when they mention it. I’m a fairly smart guy in a lot of areas, but figuring out each year’s shifting tax code and how to deal with deductions and estimated tax payments, etc., and frankly, my own feeling that the tax code is sort of unfriendly to small business owners, just gives me a headache. And I’m willing to pay H&R Block the fee each year that will have them sitting down with the IRS and me should we get audited. PITA, indeed. I think it was Mark Twain who said, “Go do some traveling, see how the world actually is, and you’ll sound less like a dumbass when talking about stuff.” I could be paraphrasing. With that in mind, it would be interesting to compare the percentages of people who are for and against raising taxes as Obama has suggested, and that have traveled extensively in impoverished areas throughout the world. It just blows my mind that so many people think they are poor in America. Even most of the homeless guys I work with know they have it better than people in many other countries. How disingenuous to say those daycare numbers are high. 4 years ago, when I left work to stay home with my second child, daycare for infants in Westchester county New York was $1850. I assume the cost is at least the same now. With a second child, my professional life in finance (although not hedge funds) as not worth working 12+ hour days to bring home less than a nanny would have made, if I could have afforded one after tax. That’s why we moved to Germany, where a single income supports a family. Read Elizabeth Warren’s book, The Two-Income Trap, to see exactly how that worked. We don’t all have jobs that let us move to the midwest. On the other hand, our taxes are substantially higher in German, substantially… and I am fine with the society that my tax dollars bring me (and before you ask, we file a tax return in the US- our German taxes more than offset the merican ones, so I am sure that our rate here is 19% higher than there). I have an accountant, but even if you do have an accountant, you should go through the return carefully and double check their work. I have caught a couple of important mistakes in my return over the years–not because my accountant is incompetent, but because I know my financial situation better than he does. If he makes a mistake, I’m still the one that has to pay the penalty, which means it’s still a pretty good idea to understand how taxes work and what all the forms in your return are about. 
While now teaching HS English I am still a CPA and do my own taxes annually, save for one year–the year of my divorce in 2002. Even though still practicing accounting full time back then, I knew that I was not up to speed on the divorce laws and their interplay with the tax code and regulations. So I, a CPA, hired out my tax return preparation to another local CPA. And frankly, a lot of CPAs in public accounting always hire out their own returns to their partner CPAs so that a fresh set of CPA eyes look at their own numbers. Amen, brother John, amen. Anyone who needs to prepare their return on the actual Form 1040, instead of the watered down versions, needs to have a professional third party preparer do the deed. GinBerlin: "How disingenuous to say those daycare numbers are high." It's not disingenuous at all. They are high. Here's a data sheet for the costs of child care in California; here's one for the cost of child care in New York. They're from the National Association of Child Care Resource and Referral Agencies. In both the case Bill notes and the case you note, the amount is substantially above the average. However, again, child care costs have nothing to do with the tax discussion here in the US, except to the extent one wishes to discuss the Child and Dependent Care Credit — which will lower one's overall tax burden — so let's not get ourselves off on a tangent here. When the day arrives, will you tape a thin strip of Objectivist Jerky to Ghlaghghee? Mmmm….Objectivist Jerky. The other other white meat. I sincerely wish I made enough money to have to worry about marginal tax rates. Until that time I will have to live with a purely theoretical understanding of the subject. I mean, up here in Canada we have a tax regime that is so much simpler . . . Day care in the DC area for infants is competitive at around $12k a year. Bill – FYI for your friends, day care expenses are tax deductible. Up to a point, not sure what its ceiling is. Certainly lower than 24k a year. So, not only is a portion of that daycare deductible, but their marginal rate only goes up for the income earned above the bracket. So, for the example of the 250k+ club, only TAXABLE dollars that they earn above 250k pay the increased tax rate. Your friends won't be affected by that tax hike. If your argument in general is that marginal tax rates are a disincentive to work, you might as well go ahead and argue against taxes period. Because, same difference. Taxes are not a disincentive. Tax rates can be prohibitive, but taxes are a cost of doing business. There will be no positive economic change in this country unless there is a bi-partisan effort to drastically reduce SPENDING, particularly Defense spending. If Barney Frank and Ron Paul can come together on this one, why can't we all?!! Sometimes I think that all of the animus centered around who should or shouldn't get tax breaks is purposefully fueled by politicians (from both parties) in an effort to distract us from the ridiculous amount of deficit spending that is crippling any chance for economic recovery. Jason: Spending reduction: not on topic. If there's ever a sequel to Atlas Shrugged in which Objectivist Jerky is featured, I'll be sure to buy a copy. Hint, hint. I don't think it's tangential, except in the highest levels. Certainly at my income, it was a reason to stop working and I am an accountant and know how to use tax credits: it wasn't taxes paid, it was cost of childcare.
The state of NY is large, as is the state of Illinois: I will guess that the people living outside metro areas pay substantially less than that average, just as the median household income in metro areas is substantially higher. I still don't think that's a reason to whine about anything other than a government that doesn't use tax dollars to provide adequately staffed and maintained daycare, as the government of the country that I now live in does. Isn't this thread about whether taxes are worth whining about or not, or have I lost track? I'm all for higher taxes and better services (childcare, family, infrastructure, welfare support, job training) myself. Maybe you can help me out and write an entry on the "Spending Elephant" in all of our rooms, then. GinBerlin: "I don't think it's tangential, except in the highest levels." And yet, it is, because the discussion is whether people understand the tax code (and/or believe taxation is theft), not whether expenses unrelated to taxation motivate people to work less or more. Your second graph is more on topic, however. Perhaps relevant to this discussion, at least in the "after reading this post and laughing about 'Going Galt' seeing this kept me laughing" way: Still chortling, here. And um, I think if a couple is grossing $140,000 and can't make childcare work on that income, something is wrong with their math, and they really do need an accountant. Even in the Bay Area. Maybe *especially* in the Bay Area. Perhaps they bought more house than they could actually afford? Also: "Would you work full time for $10,000 in cash?" No, but I'd work full-time to cover all my necessary expenses like housing, transport, food & water, child care, household maintenance, health & dental care and yes, also my taxes, and have $10,000 disposable income left over. Because that's what you're actually saying, isn't it? After spending her share of the bills and paying her taxes, she's got $10,000 left over. I should think pretty much anyone would LOVE to have $10,000 every year with which to do whatever they want. I sure would! By the way, if they both make the same $70,000, doesn't that mean that after splitting all the bills and taxes and childcare, the family has $20,000 left over? Wow, that's great! John, There are disincentives built into the tax system that penalize several categories of people and discourage people from investing their income into more productive businesses, which in turn, if the disincentives were removed or reduced, would create more jobs for other people. There is the marriage penalty; my sister makes about 30K a year in her job; if she were single, she would pay about 10-15 percent of her income in local, state and federal taxes. Being married with 3 kids, she and her husband now pay about 25 percent in local, state and federal taxes. They have a higher income, but the disincentive is still there. My mom lives on the 401K that my dad spent his whole working life keeping well funded; she gets taxed twice by the feds: once because she gets dividends from the investments in the 401K and once again when she files her personal income taxes. She has the best tax accountant around who does his best to minimize her tax burden, but the disincentive to invest is smacking her in the face every time she gets her quarterly 401K statement. In NY state, it is even worse because the state does not allow dividends from investment income to be deducted from state taxes unless it is an officially approved NY state bond.
And she doesn't invest in NY state bonds because they pay bubkus for a dividend. So there is that disincentive to invest on a state level. In my own case, my tax accountant has warned me that I can expect to pay NY and Vermont state taxes even though I only lived and worked in NY state for 15 days in 2010, having taken a job in Vermont. NY State figures it would cost me more in time and money to contest their erroneous tax bill than to just grit my teeth and pay up. They are right. I don't make tons and tons of money; I work for the feds and make about 45K a year. It is crap like this and other disincentives that make me steaming mad about this so-called "progressive" tax system. A flatter tax system would actually, in my view, encourage people to keep more of their own money and invest it in productive enterprises of their own choosing, instead of having Uncle Sam taking more and more every year, wasting 1/3rd of it on bureaucracy and pet funding schemes and earmarks. A flatter tax system would also do 2 other useful things: 1) without all the tax breaks and loopholes written into the system by the wealthy that can afford lobbyists and lawyers, the rich would pay more in actual dollars and 2) the rich would actually save money not having to pay said lobbyists and lawyers to game the system for them. Thank you so much for this Scalz, since the first thing I thought when I read that too is that one of them thar fancy accountants could probably shave enough off of his taxable income in order to push him into the lower tax bracket if he's really that close to the line. (I bet private school is tax deductible…) I have an income in that magical six figure territory, though only just barely. I do my taxes every year with Turbo Tax. Yes, a very high earner should use an accountant. Likewise for someone who has a complicated return (like anyone self employed). But, if you have a more basic situation, like most people who have one or two employers in a household, the computerized tax programs can do a great job. I dare anyone to find any deduction the program has missed in my returns. The only thing slightly complicated is a couple thousand a year for consulting and the expenses are all obvious. Where are all of these "loopholes" everyone talks about? There really aren't any. We take the standard credits every year. We're getting the education tax credit for our son in college and this year the energy tax credits. We itemize our mortgage interest and state/local taxes. Yes, the return seems complicated, but we just answer the questions and Turbo Tax fills it all in for us. I can read over what it has produced. We have never been audited because there's really nothing to be audited about. I always tell the story of a friend of mine who took his complicated tax return to an accountant one year and the accountant managed to save him $800. He was very pleased. Of course, the accountant charged him, coincidentally, $800. I hope people realize what many accountants do with your tax return: they use a program not unlike Turbo Tax. In general, an accountant can be a good choice, but saying anyone over $X per year should use one is very much an oversimplification. Off topic, of course. It's been at least 15 years since I saw/heard anyone use "flensing" in a sentence. Bill@11: I find those numbers amusing, because my wife and I had that exact situation. She makes about $70k. I live in the SF East Bay. However… $10k/year is utter and complete bullshit.
Again, speaking as someone whose wife stayed home and then decided to go back to work at about a $70k salary… 1) Preschool is less than $1k/month (Even now, the private school my son goes to is only running us $890/month) 1.1) Day care is also tax deductible. 2) I make substantially more than that guy did and we are thus much higher up the tax ladder, yet taxes aren't even close to what you imply. Speaking, again, as someone in almost exactly that situation, your numbers are obviously completely made up. Having been in that *EXACT* situation, we found that her job meant about $30k/year net. (Actually, more, because that's after maxing out the 401k on her salary.) There are certainly "mommy-trap" situations, but they generally involve entirely unskilled labor, where only minimum wage is involved, not the sort of skilled labor that brings in $70k. On another topic, one thing that many people are completely unclear on is the two ways the tax code entirely favors the upper middle class: 1) FICA tops out at around $107k. What this means is that in real terms, unless you pretend that FICA is not a tax, someone making $120k/yr pays a lower percentage of their income in taxes than someone making $90k/yr. 2) By far the biggest deduction for individuals is the mortgage interest deduction. Renters generally get nothing like it. It should be pretty obvious that poor people aren't likely to be able to use it. The tax code punishes renters to the benefit of owners, and thus essentially punishes the poor to the benefit of the middle and upper middle classes. Christopher Schaffer: 1. Paragraphs are your friends. 2. You apparently missed the note I left upthread mentioning that trotting out our favorite alternate systems of taxation is not on point to the thread. Charles: "I hope people realize what many accountants do with your tax return: they use a program not unlike Turbo Tax." This is a little like saying since you can buy the same surgical equipment as a doctor, you can do your own surgery out of a book. You're paying for the expertise, not the tools. @Bill so you are saying this person could make ten grand a year cash after expenses and doesn't consider it worth it? Well fuck me sideways with a cucumber, but if you can give me the name of her prospective employer I'll start Tuesday morning bright and early and never bitch about money again. I make only a couple of K more than her predicted pocket-money per year (I think about 13-15k USD depending on the bouncing about of the exchange rate from GBP) before expenses & tax so you'll pardon me for being a bit lacking in sympathy there. – As to why rich people panic because they don't understand taxes, it is because they are rich. By and large the centre middle class upwards have enough money swilling around that they don't know the real value of true money. They, generally, have never had to account for every single last penny, and do a bit of robbing Peter to pay Paul on top of it. The well off panic about money because it is an abstract to them. "Ignorance is curable. Stupidity isn't." Some math: At $70k/yr, you are in the 25% bracket, which runs to $82,400/year. The next bracket is 28%.
So the federal taxes on a "new" $70k/yr job are: 25% * $12,400 = $3,100; 28% * $57,600 = $16,128; total federal taxes = $19,228. In California, one bracket runs from $47k to $1 million, and it is taxed at 9.3 percent, so state taxes are: 9.3% * $70,000 = $6,510. And of course, there is FICA at 7.65%: 7.65% * $70,000 = $5,355. The grand total is $31,093 in taxes, leaving a take-home pay of $38,907 per year. If you assume the gold-plated preschool, yes, this works out to about $15k per year. But if you are looking at taxes of $31,093 and day care expenses of $24,000, I'm not sure you can really blame the government. Note that this is only if you've made no effort to find tax deductions. For instance, there's a $3k deduction for child care. (Admittedly a drop in the bucket with even reasonably priced day care.) Also note that, according to wiki, the median individual salary for women in San Jose is $36,936 before taxes. Found a decent & reasonable response to the "Taxation = Theft" crowd entitled Paying for what you use up CrypticMirror: What is the real value of true money? How much true money does it take to make a real value? I'd say the great thing about capitalism is that the money one has is real whether one has a metric shit ton of it or not. And I'm not all that receptive to the idea of the rich being the ignorant ones. Lack of knowledge about taxes seems to be a pretty equal opportunity ignorance. Same with money. It takes all types. There are people who take an interest and learn money and its management. And there are people who don't. And then there are people who don't know they don't know and speak extensively about the subject on the internets. Mind, this isn't to say I particularly disagree with your first and second paragraphs there. I don't think Bill's numbers add up. But, it sounds like second hand information and analysis. Having followed your posts on the professor's article, I think you missed the point (perhaps deliberately). I have also noticed that you tend to tar everyone who is concerned about high taxes and government spending as an Ayn Rand fanatic. While this may score brownie points among the Scalzi Peanut Gallery, it is intellectually lazy—not to mention inaccurate. Ayn Rand devotees comprise a small subset of Americans who are concerned about the growth of government spending. When you bring out this straw man, you sound a bit like the right-wing commentators who want to cast every liberal Democrat as a closet Trotskyite. Henderson made two points, which are quite reasonable: 1.) Government-sponsored social spending usually triggers the law of unintended consequences, and 2.) The cost of government is rising, and someone has to pay for it. A desire for small government and lower taxes does not necessarily make one an Objectivist or a radical libertarian. I don't make $250K per year, but I dislike seeing so much of my tax money going to foreign wars, foreign aid, Wall Street bailouts, and yes, welfare. To cite one concrete example, 40% of the births in the U.S. are now out-of-wedlock, and many of these receive government aid (i.e., tax dollars). This trend is guaranteed to increase our tax burden. Does one become a radical libertarian if they argue that government aid should be cut off at the second out-of-wedlock child? Why should I–or you, for that matter—have to underwrite flagrantly irresponsible behavior? The recent mortgage bailouts were another example of government spending that frankly made me angry, without reading a single chapter of Atlas Shrugged.
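An editorial aside on the bracket arithmetic a few paragraphs up: the whole point of stacking a "new" $70k on top of an existing $70k is that only the dollars above the bracket line get the higher rate. Here is a minimal Python sketch of that calculation; the brackets and rates (25% up to $82,400, 28% above it, 9.3% state, 7.65% FICA) are simply the figures that commenter assumed, not a statement of current law, and the function name is mine.

def tax_on_additional_income(existing_income, new_income,
                             bracket_top=82_400, low_rate=0.25, high_rate=0.28):
    # Federal tax owed on new_income stacked on top of existing_income,
    # using a simple two-bracket model (the commenter's assumption).
    in_low_bracket = max(0, min(new_income, bracket_top - existing_income))
    in_high_bracket = new_income - in_low_bracket
    return in_low_bracket * low_rate + in_high_bracket * high_rate

new_job = 70_000
federal = tax_on_additional_income(70_000, new_job)  # 12,400*0.25 + 57,600*0.28 = 19,228
state = new_job * 0.093                               # 6,510
fica = new_job * 0.0765                               # 5,355
total = federal + state + fica                        # 31,093
print(f"take-home on the new job: ${new_job - total:,.0f}")  # about $38,907

Run as written, it reproduces that commenter's figures ($19,228 federal, $6,510 state, $5,355 FICA, and roughly $38,907 of take-home on the new job); change the assumed brackets and the conclusion changes with them.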
I waited until I was in my 30s to purchase a home, and I bought something modest with a decade's worth of savings. Why should I have to underwrite homeowners who purchased homes they could not afford? Likewise, I wasn't happy when President Obama paid back his political debts to the UAW by bailing out General Motors. (UAW workers already make more than the average taxpayer.) The President and Congress effectively forced us to subsidize the UAW's support of the Democratic Party. This has nothing to do with Ayn Rand, radical Libertarianism, or a fanciful desire to pay zero taxes. Congress and the President are clearly wasting a large percentage of our tax dollars, largely to secure their own future elections. Why do you think it is unreasonable for Professor Henderson (or anyone else, for that matter) to remind the government who ultimately pays the bills? Apologies for vomiting my thoughts out there; my point is people of all economic classes complain about taxation because the politicians love to make the tax system as complicated as possible. Their justification is to make it "fair", according to whatever subjective political/economic theory they fell in love with. It also allows them to bribe the electorate with their own money and other people's money in order to get elected. People complain about the level of taxation because above a certain income level, their percentage of tax paid increases (case in point, when my fiancee and I get married and file our 2011 taxes, our federal income tax will jump to 25-30 percent together from 15 percent filing separately) and the government services they receive from tax dollars will go down. More to the point, even though higher income brackets don't need the services provided to lower income tax brackets, I am tired of the waste and mismanagement of our tax dollars that is spent more on government overhead and government earmarks. I am also tired of hearing bloviating Congresscritters like Barney Frank tell the American people that the rich don't pay their fair share when the high end of the income tax bracket pays 33% of all federal taxes. I'm all for having government help those in need when they need it; I am less enthused about politicians and pundits invoking class warfare rhetoric and envy of the better off to justify their sloppy handling of the economy and domestic spending and to grab more money to shovel into domestic spending and stimulus when they can't manage the damn programs properly in the first place. Off topic, I'm aware, but I think the level of taxation and the mismanagement of federal spending are very intertwined. Source: Real trouble for the somewhat rich person who thinks they're poor – the dreaded AMT. Getting back semi on topic. As someone with libertarian leanings, I'm sympathetic to the tax = theft argument. However, since I live in a world where useful services like fire depts, police, roads, etc. are paid for out of taxes, in practice the idea is lunacy. Todd "Why should I have to underwrite homeowners who purchased homes they could not afford?" Because you care about the value of your house. Also, dude, you should totally read that book. You'll dig it. Hard. Interesting post and comments. I'm (mostly) with Charles @44; IMHO, John's oversimplified things re. hiring an accountant. No doubt some people need an accountant but don't use one; I bet the reverse is true, too. But graduating to the full 1040 doesn't mean you automagically need an accountant.
My taxes are pretty simple (and not because I’m overlooking a bunch of things ;-) and while I may need other things, an accountant isn’t one of them. At least, not yet. ;-) Bill @11, even if that $10k number is true (and I doubt it), there are other reasons it would be financially beneficial for the wife to work, e.g., she’d be earning social security benefits. Todd: “I have also noticed that you tend to tar everyone who is concerned about high taxes and government spending as an Ayn Rand fanatic.” You missed comment #7, which means the rest of your comment is a misapplied lecture. Try again. @OtherBill the real value of money is the amount you have to pay to avoid living under a bridge and digging through rubbish bins for food. Once you get above having to choose between rent or food, then money is an abstract and you’ve got it to spare. Real money is the stuff you don’t spend on little (or less than little) luxuries. Real money is the amount you have to spend. Abstract money is the amount you don’t have to spend but like to think it is. Other Bill@56: “Because you care about the value of your house.” Actually, the government’s meddling in the housing market was one of the factors that led to the housing crash in the first place. Rather than letting the market decide who should be a homeowner, the Federal government bullied banks into underwriting low-income mortgages for years. (To be fair, this policy was continued under both Democratic and Republican administrations.) As for the Obama Administration’s current efforts to “fix” the problem by throwing good money after bad, the results have not been promising so far. Government attempts to manipulate markets almost always result in bubbles or shortages, as any economics professor will tell you. Like Ronald Reagan said, the most frightening words in the English language are: “I’m from the government, and I’m here to help.” As for reading that book (I assume you mean Atlas Shrugged), I actually did read it about 20 years ago. Ayn Rand’s novels were never really my thing. John: “This is a little like saying since you can buy the same surgical equipment as a doctor, you can do your own surgery out of a book. You’re paying for the expertise, not the tools.” It’s a little like it, but it’s a lot unlike it. Turbotax is a computer program that analyzes your tax situation and figures out how much tax you owe (or have overpaid), files the returns with the IRS for you, and provides you with documentation of the process. The expertise is built into the program, and you don’t have to do the calculations (e.g., subtract line 32 from line 24 or 26, whichever is greater). Yes, I’m sure you know this. But paying an accountant $800 to get you an $800 return when TurboTax would get you the same return for $50 is like going to the doctor to because you cut yourself shaving. If you understand your own finances well enough to enter the numbers into a computer program when it prompts you, then an accountant is not necessary for you. Todd: Also, if you want to post re: Professor Henderson’s points, this is the right thread for it, not this one. Kevin B: “If you understand your own finances well enough to enter the numbers into a computer program when it prompts you, then an accountant is not necessary for you.” I hope you feel the same way if you’re ever audited. Beyond that, if you want to use Turbo Tax, you go right ahead. I don’t recommend it for anyone with a complex tax situation, myself. 
My own experience and the experience of others I know is that it’s easy to miss things using it if your tax situation is not a simple one. John @59 “You missed comment #7, which means the rest of your comment is a misapplied lecture. Try again.” Sorry, I based my comment on your multiple posts on Henderson’s essay (which did involve numerous Ayn Rand metaphors), and failed to read your strategic backpedal in comment#7. My bad. No worries, Todd. Although I would not characterize it as a backpedal, rather a clarification. Actually, the government’s meddling in the housing market was one of the factors that led to the housing crash in the first place. In the same way as describing someone with a cold who was also run over by a bus as having their virus be “one of the factors” in their death, sure. The bubble of 2005-2008 when it was most extreme was driven largely by private money. Rather than letting the market decide who should be a homeowner, the Federal government bullied banks into underwriting low-income mortgages for years. The “market” was doing exciting things like not lending money to African-Americans, so, gee, no, I think I’d rather the “market” not decide who gets to be a homeowner, thanks. Todd @61, the banks didn’t issue subprime loans because the government bullied them into it. They issued them because they were hugely profitable. They were so profitable that banks often tried to steer even consumers with good credit, who qualified for traditional loans, into such subprime loans. re: someone else’s comment about the accountant saving them $800, and then charging them $800, hey! they saved money! After all, TurboTax would have cost $50 :) John: “Beyond that, if you want to use Turbo Tax, you go right ahead. I don’t recommend it for anyone with a complex tax situation, myself.” Thank you. I don’t either, and didn’t in my post. I only recommend it for those people like me who understand their own financial situations fairly well, where an accountant would be overkill. If I am ever audited, I will feel differently. I will hire an accountant, or possibly a tax lawyer at that time. Re: TurboTax and its kin: Honestly John, your disagreement with Charles depends on what level of accountant you’re talking about. If people think “going to H&R Block” when you say accountant, then they are not really getting much value over H&R Block’s own TaxCut software. I had a (more than usually) complicated tax situation last year, involving an estate and property over and above my norm, and I figured H&R Block would save me time and hassle as well as maybe some money. Instead it turns out the guy wasn’t able to handle the estate tax return anyway (I got a referral to their “Premium” service, and my attorney–someone who DID provide value for his fees–noted that he could do it as a very simple estate for a far fraction of what they would have charged me) AND I ended up having to file an amended return for the first time in my life because he (his software actually) screwed up some things that I only caught because I was double checking with…TaxCut. So the point being that if your income and its complexities justify an independent CPA as a tax accountant, then I agree with you, but if your income just happens to be in the low six figures WITHOUT a lot of complexities, it’s quite reasonable to say that TaxCut or TurboTax are just as good as the “tax professional” in the strip mall down the way, and doesn’t necessarily justify the step up to an independent CPA. 
I'm pretty sure that's what Charles' point was meant to be. My husband works in the tax field and would never do our taxes. If you own a business, own rental property, are self-employed, etc., an accountant is worth the money. $2,000 a month for childcare? Huh? I don't even pay that in LA. Must be a nanny situation? Lots of preschools out there that charge less and are wonderful. I didn't work for a while because I was only making around $27,000 a year before taxes and childcare would have eaten up most of my income. Luckily I could be a stay-at-home mom for that time. David@66: "The bubble of 2005-2008 when it was most extreme was driven largely by private money." "Private money" in this case involved the securitizing of mortgages; and the securitization of debt (i.e., selling debt as a security in a secondary market) is a long-established practice. It usually works, as long as the debt is legitimate in the first place. What happened in this case was that investment banks securitized mortgage loans that should never have occurred in the first place, many of which were instigated by the Federal government pressuring lenders. Certainly private money was involved. Government manipulation of markets usually does create opportunities for unscrupulous individuals. Once again, the law of unintended consequences. As for the "market" not lending to African-Americans: I knew that the Race Card was coming, because it is the tool of last resort whenever one enters a venue dominated by leftwing individuals. When all else fails, say: "Yeah, dude, but it's racist!" I was unaware that the loan officer's handbook at Bank of America contains a clause saying: "If any African-American loan applicants enter the bank, don a white sheet and chase them out immediately." (Perhaps you could find one posted at the Daily Kos.) Banks are motivated to lend to any borrower who looks like a good credit risk. If an African-American applicant looks like a good credit risk, he or she will get a loan. I am quite sure that my family doctor (who happens to be African-American) has no trouble securing credit, even in this economy. And he doesn't need an army of sanctimonious Washington bureaucrats to do it. John@63: "My own experience and the experience of others I know is that it's easy to miss things using it if your tax situation is not a simple one." [Speaking about tax software.] I agree. I use TurboTax (I don't have a terribly complex tax situation any more) but I always cringe a little bit when it asks something like, "Do you have any other taxable income?" It reminds me there are things I don't know that I don't know. Things that can be really inconvenient later. @Todd, Lots of, frankly, sloppy reasoning in your post but this bit leaped out and practically throttled me… "I wasn't happy when President Obama paid back his political debts to the UAW by bailing out General Motors" What, pray tell, was your alternative, UAW or not? I've seen estimates that the cost of having GM fail completely is somewhere north of $100M – that's the first one that came up on Google, but there are a *lot* of them, and they're all a lot more, in terms of impact on the total economy plus unemployment payments, than the money actually spent, which, if memory serves, is being paid back with interest. Bit like that heinous bank bailout that's a) in profit and b) probably saved western civilisation from the total collapse of the modern banking system. By all means worry about expenditure.
I suggest you start with the US military budget – here's a thought, get rid of one of the strategic nuclear defense lines? Submarines, missiles AND bombers? Pick two. Oh, and you can probably get by with a few less carrier fleets. Then come back to me about unwed mothers and healthcare. Besides, the overall tax burden on the American tax payer is still WAYYYYYYYYY too low. "What happened in this case was that investment banks securitized mortgage loans that should never have occurred in the first place, many of which were instigated by the Federal government pressuring lenders." No, they weren't. As someone above has pointed out, banks went into subprime mortgages because they were profitable, not because of government pressure. "I was unaware that the loan officer's handbook at Bank of America contains a clause saying: 'If any African-American loan applicants enter the bank, don a white sheet and chase them out immediately.'" Then perhaps you should pay better attention to the world around you. Try looking up "redlining" and you'll have a bit of your ignorance remedied. "Banks are motivated to lend to any borrower who looks like a good credit risk" Banks are made up of people and those people are driven by all the motivations–good and bad–that people have. By your logic, segregated lunch counters could never have existed because, after all, restaurants are businesses and businesses will sell to anyone, as long as their money is good. Right? @Todd, "Actually, the government's meddling in the housing market was one of the factors that led to the housing crash in the first place. Rather than letting the market decide who should be a homeowner, the Federal government bullied banks into underwriting low-income mortgages for years." Go away and come back when you've done some research on this. The sub-prime mortgages underwritten and issued to people in the early stages have had pretty much the same historical default rate as more traditional mortgages. If the CRA loans had been the only problem, there wouldn't have been a banking crisis. You might also want to look up where the real housing repossession crises are happening, and it's generally speaking not in CRA-heavy areas. Unless they were lending money to people to buy second homes in Miami too… Daveon@73: The biggest problem with the GM bailout is that the government is forcing us to underwrite a business model that is likely to fail in the long run anyway. Companies have failed numerous times throughout U.S. history. That is the way the market works, and it is a good thing: because capital then gets reallocated to more efficient uses. If you have ever been in a UAW plant (and I have), you will have no trouble discerning why these facilities are inefficient. Since you obviously aren't familiar with the auto industry, here is a sample, from a politically neutral source, USAToday: ***** "Besides, the overall tax burden on the American tax payer is still WAYYYYYYYYY too low." This tells me that you are still too young to have paid many taxes. All I can say here is, "you'll understand when you're older."
Anyway, you dodged the question – what would you have done about the cost of letting GM fail? Let’s try to reel in the personal snipeage, folks. @Todd – while you’re learning about redlining also take a look at the HMDA data through the years. As for the reasons behind the housing crash try reading Paul Muolo’s excellent book Chain of Blame: How Wall Street Caused the Mortgage and Credit Crisis. (Disclaimer, I copyedit and post his column What We’re Hearing that runs on the National Mortgage News website.) Having to use specialized software just to do taxes is bad enough. The fact that the tax code is so complicated that people must pay experts just to avoid breaking the law by underpaying their taxes (risking fines and jail time) is a sign that the tax code is in desperate need of revision and simplification – not just in terms of the wildly differing rates of taxation for different sources of income, but all the deductions, exemptions, “credits,” and other loopholes that keep getting shoehorned in. Re: Georgiana’s post: Michael Lewis’ The Big Short is also an excellent book on this subject. Daveon@77 “40+ old enough for you? I moved to the USA 2 years ago from the UK where I was paying 40% on all income over $55,000(ish)” Sorry…Few Americans who are old enough to pay taxes would assert that our taxes are “WAYYY too low”. Americans take a different view on taxes. (You’ll remember that little tiff we had over the Stamp Act a few years ago.) You’ve had to suffer the effects of a century of Labour. You’re likely grateful if the government leaves you anything at the end of the year. Compared to what you’re used to, our taxes probably do seem WAYY too low. Anyway, I meant no offense on this point. ***** “Anyway, you dodged the question – what would you have done about the cost of letting GM fail?” Your question presupposes that the government has an obligation to pick winners and losers in the marketplace. GM already *has* failed. Any corporation that cannot stay afloat without an an infusion of government money is a failure by definition. But anyway, what *should* the government do? 1.) Accept a breakup or rapid scale-down of General Motors. 2.) Facilitate a sale of GM business units to more competent owners. 3.) Outlaw the closed union shop. What the Obama Administration has done is simply to delay the inevitable. Mark my words, GM will be back for more taxpayer money within a few years. Todd: Wait, so you had read the book? And found twenty years later you still largely agree with the ideology it advocates? And you clearly enjoy the pejorative labeling of people. I’m sure mr. too young and mr. snap-played-ever-predictable-race-card will agree. So, what’s with the criticism? So you’re not an Ayn Rand maniac. Sure, I know what you mean. But, you do support the ideology on display in the book. And poor people made bad decisions. And big governments, without even knowing better, pervert neutral (possible even good) corporations by forcing them to do naughty things. The distinction then is that you got your political positions through careful observation and calculation over a period years, not from some dimebook novel. Which I dig. But, that doesn’t exactly flow with “oh you’ll understand taxes when you’re older sonny jim. Ben Franklin said that.” Cryptic Mirror: “Real money is the amount you have to spend. Abstract money is the amount you don’t have to spend but like to think it is.” I understand what you’re saying and endorse the principle reflected. 
My objection is more OCD re the use of "abstract" and "real" to differentiate the necessarily identical items. @Todd, It's really hard to take you too seriously when you bring out a Foxesque talking point like: You've had to suffer the effects of a century of Labour. You're likely grateful if the government leaves you anything at the end of the year. Except that the Conservatives have been in power in the UK for most of that century, and Margaret Thatcher was the longest serving PM of the 20th century. This isn't really a relative thing. When my effective tax rate at the end of the year on the kind of income my wife and I earn is 16%, we're paying too little tax by a huge margin. It's not a matter of opinion but rather of fact. And you also still dodge the point on GM. I can't disagree, strategically, with what a government *should* do at a conceptual level, but the theoretical operation of government and the real world are two different things. Letting GM fail catastrophically in the middle of the worst financial mess in almost a century would have been, well, catastrophic. And yes, GM possibly will fail long term. But on this occasion, the government made money on the deal and next time, with luck, they can perhaps afford to throw thousands out of work and let thousands of supply chain related businesses fail. The Obama administration had no choice given the situation they found themselves in. Claiming otherwise makes no sense except if you're trying to make political, not practical, points. Apologies for the double post: I saw John noted Michael Lewis' "The Big Short" as a good read on the subject. I want to wholeheartedly second that. This book is required reading for anyone interested in the subject. Reference: facts, opinion, entitlements. Second, FYI Todd and Mallet of Loving Correction: noting John's concern over personal snipage that popped up while I lazed through my comment, I'd like to preemptively note that the criticism is more tongue in cheek than serious. Todd: "Few Americans who are old enough to pay taxes would assert that our taxes are 'WAYYY too low'." Whether Americans would assert such a thing is an entirely separate discussion from whether, given the level of service Americans expect from their government, their taxes are too low. Let's not confuse the two, or attempt to use the assertion of one to wave away the other. "You've had to suffer the effects of a century of Labour." In point of fact the Conservative Party has held Parliament more often, and for longer periods of time, than Labour has since 1900. "What the Obama Administration has done is simply to delay the inevitable." It's worth remembering that the initial loans to GM were provided to it by President Bush. Todd, I understand you're enjoying playing with rhetoric, but try to have some facts in there, and some logic as well. Ugh, that was terser than I meant – sorry about that. You've had to suffer the effects of a century of Labour Sigh. Start with Orwell's _The Road to Wigan Pier_, understand what (not-privileged) British life was like in the pre-1945 era (short answer: awful), and then get back to us. I have been scared to hire an accountant after years of writing checks to my Mom's CPA – $1700 each for her and my deceased Dad's trusts. This was in Grand Rapids, MI. I never understood why it cost so much. I didn't see anything particularly complex about them. Sorry to be nosy, but could you give some idea about what a CPA costs?
CPA’s are risk mitigation, it would be nice to add some cost versus benefit to the equation Are we sure that accountants aren’t the ones spreading hysteria about taxes, so as to encourage people to seek and retain their services? (Similarly, I’m pretty sure that financial advisors are responsible for the popular “you won’t see a dime from Social Security” meme.) Matt McIrvin: I’m not being paid by the accountants to recommend people with complex tax profiles retain their services. I just think it’s a sensible course of action. Thanks for the post and the inspiration. I was looking for something to help me pull together my thoughts on this today and this did it. Two off-topic things: 1. The free market is an inherently horrid way to set policy because, unlike a constitutional representative republic, it doesn’t guarantee to protect the rights of minority populations. It’s pure mob-rule democracy (and the bigger your wallet, the bigger stick you get to wield in that mob.) 2. Todd, USA Today is not an unbiased resource. /actual journalist Now, as to topic… I will agree that the tax code is unnecessarily complicated as it stands. However, progressive taxation in and of itself is hardly a burden on anyone, and is in fact extremely more fair than flat taxes, for instance. The reason for this is something I mentioned a few posts ago: Relative cost of living. People in the lower tax brackets pay significantly higher percentages of their gross income to basic living costs, and paying less of a percentage in taxes helps balance that out. I have no problem whatsoever paying higher rates on the portion of my income that goes above various points, because I know very well that some poor schmuck who’s making less than half of what I do is still paying that same $3/gallon for gas that I do, and that cost is a far bigger hit to his wallet than it is to mine. (I also have no problem with my tax dollars going to help folks who are disadvantaged for some reason or another. Kinda wish less of it went for military spending, but welfare? I have no problem with that. Cutting welfare doesn’t solve the problem of people having kids they can’t afford. It only punishes the innocent kids it helps to feed.) I HAVE worked for $10,000 a year, gross, and been damn glad to have the job. And $2000/month for daycare is $500 a week, which is absurdly high, even for a place like San Francisco. Make that $10,000 a year, NET…and I was actually scraping by until the energy companies decided that Hurricane Katrina was a great excuse to raise the cost of energy to the point where my bill went from $220 to $600 a month. And no, I don’t heat with oil, coal, or natural gas. My house is all-electric, with most of the local grid powered by Vermont Yankee or Hydro Quebec. I’m doing better these days, but I’ve always used either an accountant or gone to a tax preparation place like H&R Block. If I’d tried to do my taxes myself last year I would have completely missed the stimulus-induced tax credit I got for putting on a new roof, which gave me the largest tax refund I’ve ever had in my life….. I love this thread. * Mr. Scalzi notes that many rich Americans are upset about their tax burden. * Mr. Scalzi notes that many of these same rich Americans don’t demonstrate an understanding the US tax code. * Mr. Scalzi notes that many of the above rich Americans believe the problem lies with the US tax code and not their ignorance, and that they use Objectivist philosophy to buttress their beliefs. * Mr. 
Scalzi offers a perfectly good solution, Objectively-speaking; that tax-averse Objectivist-Americans hire private-sector tax code navigators (or accountants) to engineer compliance with the law so as to minimize their clients' final tax burden. * Many rich Americans proceed to exhibit in this thread something I like to call "willful ignorance." When my income situation changed a few years ago, I hired an accountant and a financial planner, because what I am skilled at doing is not quite the same as what needed to be done in order for my family and me to meet our goals. Our skilled knowledge workers do their thing, which lets my wife and me do our thing. Charles's anecdote in comment #44 "Guess how much it cost? What he saved in taxes" is at best, well, anecdotal. Certainly my accountant and financial planner save me much, much more than they cost me. Of course, I did do my homework before I hired them, and they were kind and professional enough to enumerate their fees up front. tl;dr Tax protestors are unintentionally hilarious. Accountants are good for lots of folks, but do your homework before you hire one. I'm not being paid by the accountants to recommend people with complex tax profiles retain their services. Oh, of course not; I'm just spouting about politics. It sounds like you've got a good, competent one; your previous writings suggest that you are in fact pretty damn wise about money; and the post may actually spur me to get an accountant, since I suspect my own tax profile will pass the too-complex-for-TurboTax horizon sometime soon if it hasn't already. But… my impression is that people in the financial-services sector do tend to lean right politically, and that they sometimes pass the associated lore on to their customers. More financial planners than accountants. That tax code does not seem to be a problem for many people: Todd@82: You are under the common misapprehension that what the colonies were objecting to with the "Stamp Act" was that it was a financially oppressive burden. It wasn't. The objection to the tax was not its size, but the fact that it was levied by a parliament with no colonial input. The people then weren't against taxes per se, just taxes levied undemocratically. The battle cry was "taxation without representation", not "taxation is theft". The idea that the founding fathers were generically anti-tax is a bit amusing given that George Washington took the army into the field and put down a tax revolt in the first year of his presidency. (Look up "The Whiskey Rebellion".) Back to the topic: personal tax programs are fine if you have simple taxes. I've used them for the last 3-4 years and have had no troubles. (My wife and I are both salaried and our main deductions are mortgage interest and charity.) Years before that, I was a contractor, she had a personal business and we owned stock. There's no way I'd have done taxes then without an accountant. [Deleted because this particular "I'm leaving in a self-righteous snit and never coming back, so there" comment was especially boring -- JS] A search on "AMT" found only one comment on this thread. Beyond the basics of taxation you mention, most people over 250k family income per year are paying AMT. So any kind of "here's how the marginal rate works" level discussion for the wealthy poor needs to include an AMT orientation. Bush raised the AMT cutoff to 250/family, and that's why Obama says taxes won't rise for persons making under 250 — he'll keep the Bush AMT cutoff.
I don’t think Bush actually lowered the AMT rates btw, just the cutoff, so I’m not sure how Mr H ends up paying much more in taxes. He must pay AMT now anyway. I loved the bit about the “Taxation = Theft” crowd winding up as Objectivist jerky. Their philosophy is one that could only arise in a landscape in which the state-supplied infrastructure is so ubiquitous and the state-supplied military and law enforcement so effective — that is, the suburbs — that it’s possible for them to lose track of the fact that life as they know it bears no resemblance to a state of nature. Or, as I said sometime around 1977 or so, when I was deciding that Obectivism was silly and Libertarianism unreliable, I keep thinking about what happens if I have a run-in in a dark alley with a seven-foot-tall sociopath who hasn’t read Lysander Spooner. PClark@89, I have my taxes done by an excellent CPA who charges me $300-400 depending on the complexity of the return. I was a little surprised by the person who reported $800, and I wonder if they might have overpaid, but it’s hard to say because it all boils down to how complicated your tax situation is. Mine is moderately complicated (post-divorce, dependents, self-employed, investment income, quarterly payments)–but I get the impression it’s relatively simple compared to that of many of her other clients. Also, there is no state income tax where I live. When my taxes were simpler, I remember paying under $200 (same CPA), but that was a while ago. Some tax preparation services are kind of a scam. They prepare your tax return for free or at very low cost, and make their money by issuing you an “instant tax refund,” a high-interest loan of your tax refund (no risk and very high profits for them) and this is where they actually make their money. The free or cheap tax preparation is the lure to bring you in so they can give you the high-pressure sales pitch for the loan. I doubt the people who prepare returns in houses like that are very qualified–it is not their core competency. Self v. Accountant: One year I filled out the tax forms and was worried about it enough to bring it all to an accountant. He looked it over said he couldn’t do better and handed it back to me without charge. If you REALLY understand your finances then sometimes the accountant can’t do better, especially if your income comes completely from salary and you don’t have complicated deduction opportunities. But it also doesn’t hurt to check with an accountant periodically to see if you are missing something. Also: Do you know your actual tax rate? Turbo tax tells you and it can be surprisingly low when all is said and done. Finally, Marriage penalty: I make more than 2X more money than my husband. Yet we never find ourselves discussing if he should quit his job due to taxes. I don’t get the bitching about the “marriage penalty.” Why is the woman’s salary always the one that is on the chopping block? Is it because the marriage penalty people are starting from some weird “women working outside the home is new and different” attitude? Not in my family. All the women in my family have always worked – farmer, doctor, lawyer, librarian, etc. My mom also made more than my dad so I totally do not get the starting point of the marriage penalty complaints. If one spouse doesn’t work then you are spreading one salary over two people — the tax code as it is written is a huge marriage give away to the folks who want one spouse to stay home. 
Why is this called a marriage penalty when it is actually a give away to the stay at home spouse model? Which might lead to the second point — the marriage penalty people are ALWAYS throwing in childcare costs. This discussion has been refreshing to see people pointing out that taxes and childcare costs are actually not the same thing. But the marriage penalty discussions that I’ve seen simply consider childcare costs a tax on women’s work. And that view has a lot of built in assumptions. I just want to speak up and tell people that those assumptions are not universally valid or shared. After my husband and I moved to the Bay Area, we were able to do our own taxes for about two years until we hit that inevitable wall of “holy crap our taxes have gotten complicated” that comes with living in California. It also occurred at the same time we bought our condo. Home owning instead of renting meant more things to figure out tax wise. We asked our friends and co-workers for accountant referrals and to this day when tax time comes around I am so glad we did. We have a fabulous accountant. She answers our questions by phone or email, meets with us in person when we’ve needed to and has made our lives much easier when tax time rolls around. I am going to give our accountant a box of good chocolates and champagne when we hit our fifteenth year with her next year. I’m serious. She has saved us a lot of money over the years–even when we’ve owed money. She’s been worth every dollar we have paid for her expertise. I shudder to think of how we could have wasted weeks of our time and royally screwed ourselves up every year if we hadn’t looked at our situation honestly and said “We are smart, educated people who have no clue how to do this the right way. We need help and we are going to get it.” As a CPA, I thought I should chime in. However, as an attendee next week at the VP Workshop, I find myself hesitating, not wanting to set myself up for a public evisceration at the hands of Mr. Scalzi. What the hell… The tax system is a hodge-podge of rules and regulations that have been layered one on top of another for decades. Some areas of the tax code are horribly unfair and/or make no sense (AMT, Marriage Penalty, Student Loan Interest Deduction), while other areas have helped nudge the tax code in the proper direction (529s, retirement plan enhancements.) The tax code is confusing, often complicated, and so voluminous that it takes a ridiculous amount of time to get a definite answer to a simple question. (Go to and give it a whirl.) That being said, is everyone on this board intelligent enough to do their own taxes? Of course – even the numberphobes. The real question is, are you willing? Could I, a novice when it comes to all things mechanical, crawl up on the roof and fix our recently broken air conditioner. Yes, I’d like to think so. Do I have the twenty hours, the special tools and various instructions and schematics that it would take for me to accomplish the job? No. Would I rather pay someone (a professional) to come out and have it done correctly (with a guarantee) in a couple of hours? Absolutely. At some point, benefits outweigh costs. Where those two lines intersect depends on your income level, your financial acumen, and your availability. Bill – Even factoring in your $2,000/month child care number, the $10,000 net seems a little thin. Catherine S – You’re right. You would be amazed how many people don’t get beyond page 2 of their 1040 . Refund or amount due is all they care about. 
Crayonbaby – This may be off topic, but $1,500 to $2,000 a month for child care doesn't seem that out of whack. You get what you pay for. $2,000/mo divided by 4.333 weeks per month divided by 50 hrs/week (for a full-time employee who also commutes a fair amount each way) is only $9.23 per hour. That doesn't seem like an excessive amount. John Gordon – AMT is one of the top areas of the tax code that produces the most ANGER and confusion. People who simply pay state taxes and property taxes now get "ensnared" in AMT. This was never the intent. Unfortunately, it produces far too much revenue for Congress to simply do away with it entirely. Paying taxes for social programs for everyone is very much along the lines of enlightened self-interest; at the very least, without having to care about anyone beyond yourself, it's avoiding putting someone else in the position where they need to take what they need from you by force or die. The income tax system isn't complex at all until you get into deductions (whose complexity I have efficiency, ambiguity and enforcement gripes about, but that is not germane to this conversation) and AMT. In fact, let's take the single 70k taking the standard deduction. Federal income tax next year? Around 11k (~15% effective) with a 25% marginal rate. That isn't much, that isn't much at all. 150k is still down at ~22% effective with a 28% marginal rate. Disclaimer: My state (WA) doesn't have an income tax yet, hopefully we will soon, but we don't at the moment. I've been fortunate to have had quite a few years of six and even seven figure income before I retired. My tax returns were always fairly simple–mostly regular income, some stock option transactions and some charitable deductions. [I used Turbo Tax 'cause I like to look behind the curtain and see how the tax code works--if I hand it off to an accountant I learn nothing. I have never been audited in 30 years of filing the full 1040 form.] I don't have a problem paying my taxes because I like having a nearby fire station with paramedics and police officers to keep me safe. I enjoy national parks, museums and, while I wish we were doing more in space, I even like NASA. I like driving my car on paved roads and riding my bike on public bike trails. My tax dollars provide many wonderful things that I enjoy and appreciate (and some I really, really don't like too). Our tax system is far from perfect (I agree it's too complex for most to understand as stated in this comment thread), but we do get a lot for our $$ whether we are able to see that or not. JS and all others who are "rich" (according to guidelines) and do not want to keep the tax cuts: As I understand, there is nothing preventing you from paying more than what the IRS tells you that you owe. If the tax cuts are extended for the rich, will you be paying the difference, or more? I work for H&R Block every year … I have several clients who own their own businesses (I've taken the optional corporate return classes), and since we charge per form, they're not paying significantly less than what my roommate pays his CPA. Certainly interview a few before you hire one! I'm in the middle of my refresher courses right now. Yes, we do use software – but software is not infallible. If I think that the client should have gotten a certain credit and didn't, I can enter the info manually. Plus, there's such a huge list of allowable deductions, most people miss things.
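A second small sketch, since effective and marginal rates keep getting conflated in this thread: it is keyed to the "$70k single filer, around 11k federal tax, ~15% effective, 25% marginal" comment above. The bracket table and the roughly $9,500 of deduction-plus-exemption are illustrative assumptions on my part (a 2010-2011-era single-filer shape), not the commenter's own numbers or a statement of current law.

# Illustrative brackets only; see the caveat above.
BRACKETS = [(8_500, 0.10), (34_500, 0.15), (83_600, 0.25), (float("inf"), 0.28)]

def federal_tax(taxable):
    # Walk the brackets, taxing each slice of income at that slice's rate.
    tax, lower = 0.0, 0.0
    for top, rate in BRACKETS:
        if taxable > lower:
            tax += (min(taxable, top) - lower) * rate
        lower = top
    return tax

gross = 70_000
taxable = gross - 9_500        # assumed standard deduction + exemption, not from the comment
tax = federal_tax(taxable)     # about $11,250
print(f"marginal rate: 25%, effective rate: {tax / gross:.0%}")  # roughly 16%

The marginal rate is what the last dollar is taxed at; the effective rate is the total bill divided by gross income, which is why the two numbers differ so much.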
Almost everyone I talked to last year didn’t realize that prescription eye glasses are valid medical deductions. Every single nurse forgot to count her work shoes into her uniform expenses. I agree with the man who spoke about hiring someone to fix his air conditioner … I recently stopped doing my own website, because I was spending more time looking up how to do things than I was posting products. The technology is moving too fast, and it’s worth it for me to pay someone else to do the coding. However, sitting down WITH your CPA or tax preparer is essential! You *can* just drop off your paperwork and leave it to them, but another commenter pointed out that you know your financial situation better than they do. Also, if you’re there, I can ask “Does this amount include your shoes?” :D LeftField: I’m not sure how what I do voluntarily with any of my money is any of your business, actually. I really enjoy your books, Mr. Scalzi. Keeping writing! @LeftField: I have thought a lot about that, actually. My short answer is, I can also help out by not leading my family into a situation where they must use extra social service resources–so no, I don’t plan on donating extra tax money to the government above what is currently expected. But as our income goes up, we can afford to donate more to specific non-profits whose missions I admire and want to support. (I also recognize that some of these non-profits would be offensive to others, and so I don’t expect my tax money to be used–but I am grateful for both government-funded services and NGOs…) I recall listening to a speech given by Rick Steves (the travel guy). I remember him talking about the cost of being the richest nation on a hungry planet, and at a smaller level (and more relevant to this discussion) he talked about a phenomenon in Mexico. If you’re riding down the street in a car, and you suddenly run into several speed bumps in a row, you know that if you look out the window you’ll see a high wall with a very big house behind it. The speed bumps are there so that when someone tries to throw a molotov cocktail through your window they can’t speed off quickly, and the wall is there for similar reasons. These are what the nice houses look like, and it’s one of the effects of being affluent in a context where poverty is the norm. Could our taxes be lower? Sure, but I get the feeling that there are a lot of “fringe services” that aren’t readily visible that our social programs buy us, even if we might not benefit directly ourselves. That’s a long argument to make, but it’s one of the ones I like when discussing “is our taxes too onerous?” I think LeftField has a point. After all, we are free to pay extra money to the IRS if we wish. In fact, I think we should adopt this policy even more. There has always been a big push from conservatives to fight wars in Iraq and Afghanistan that I don’t believe in, so I don’t see anything preventing them from buying their own AR-15 and heading over there themselves. All of that corporate welfare? How about if those who think that’s a good idea send the money to BP and Exxon themselves. And, here’s a thought: All of the rich people who want that tax cut so much, how about if you assume some of that debt it will entail, so I don’t have to. I think all of the Bush tax cuts were a bad idea. Yes, I’ve benefited from them personally, as have most everyone. I opposed them because I think we would all be so much better off today had they not occurred. 
I advocate for a tax policy I think is better for the country even though it will cost me money. Many people who care about this country feel that way. Others feel what is important is that they get theirs. But, they sure do want that massive military budget. The funny thing is that those so in favor of tax cuts don’t realize that those cuts probably cost them money. At the end of the Clinton administration we were on track to pay off the public held debt in about 10 years. Had we not had one bungled war and one totally unnecessary war, a new massive entitlement program, and huge unfunded tax cuts, that might very well have happened. And, that 10 years would be right about NOW! The interest on the national debt is a huge expenditure (and it’s going to get much, much larger when interest rates rebound). Just a few years ago it was approaching $250 billion. It’s the third largest line item in the federal budget and could have been gone by now. Then what tax cuts would have been possible? Excellent advice. I use tax Cut myself, but I have a pretty simple return. The software helps me to make sure everything is entered on the right line. If I had investment income, became a landlord, or ever collected book royalties, I’d call an accountant in a New York minute. Your post also reminds me of my pet peeve about the tax system: taxes aren’t too high; they’re too complicated. Any system where your tax obligation depends in large part on how good you are at filling out the form (or how much you can afford to pay someone else to do it) is inherently unfair. @Charles: All for want of a couple hundred votes in Florida, neh? John@110 “I’m not sure how what I do voluntarily with any of my money is any of your business, actually.” You talk about your paycheck as though it’s rightfully yours. How quaint. I’m sure that was an attempt at sarcasm, there, Adam, but it was lost in the incoherence of the statement. Would you care to clarify? The answer to LeftField’s question is no, I wouldn’t pay the IRS because I’m not in that income tax bracket. If I were in the bracket, I still wouldn’t because if the tax cuts are extended, the economy is going to go into the toilet again, the investments I have are going to drop in value, my home is going to drop in value, the costs of everything are going to go up, we’re probably going to be losing income, etc. and dropping back out of the bracket anyway, so it wouldn’t probably matter. Then again, I might do it in a futile gesture as I watch the .01 millionaires turn us further into Mexico. Sorry to double post if that happens, but I just got sent this: Look, a wealthy person who understands math, economics and long term thinking: @John It would seem that the taking of what is rightfully yours should be called theft. Since you seem to treat those who see taxation as theft with derision, I can only assume that you give what is rightfully yours willingly to the government for the furtherance of a greater good. But is this not the case only up to a certain point? You may be fine with the government taking something around 30% of your paycheck to do with as it pleases but what about 50%? 75%? Is there a point beyond which you view taxation as theft? Or is it the case that the money you earn is not yours by right but only by the benevolence of our leaders? 
On the issue of voluntarily paying more than one owes in taxes, a couple of points: 1) Let’s suppose that the suggestion was taken to heart, a miracle occurred, and a million people each decided to pay $1,000 each in excess taxes to help bring down the national debt. Wow! But that’s a billion dollars, just enough to pay for what–one or two weeks in Iraq and Afghanistan? Under the best of conditions, volunteer payments are, well, bupkis. 2) The suggestion arises in part out of the mistaken assumption that those proposing higher taxes do so out of some crazy love for paying taxes, and not a recognition of necessity, i.e., the government has to pay its bills. The inevitable rejoinder to such proposals–wow, you Democrats/Liberals/Infidels must love paying taxes–makes no sense to me. I get a colonoscopy every few years, but no one has ever said to me, “Wow, you must love getting a cable shoved up your butt,” probably because it would make the speaker sound like an idiot. This might be only tangentially related, but all this tax-talk made me wonder about writers and taxation. I know you have to cover the SE tax (i.e. both halves of social security and medicare tax), but do you have to register as a business entity to file the Schedule C? When I ran a cafe, it was as a sole proprietorship with a dba, and the tax ID was my own ss#. So that has me wondering how it works when you’re an author or musician or working artist… (thanks in advance to anyone who knows the answer). I realize that you Americans (howdy from Canuckistan!) have an oppressively more complicated set of tax regulations than ours… It troubles me, John, that you *need* to recommend that people behave as if the tax code is a set of incomprehensible magic. There’s a bit of a SF-versus-Fantasy thing here; the effects of tax policies *ought* to be scientific, predictable, things that we can reason about and discuss based on the facts. It is really troubling when the discussion heads over to the “fantasy” end of things, where it is apparently not possible to know, understand, or predict the effects of the regulations. If they’re “magic,” then there’s no gainsaying what people might say, and that’s incredibly corrosive for public debate. I’m familiar with how Canada dealt with things similar to the tax reductions that are the root of the present controversy, and while the calculations were pretty weird (see), it wasn’t *such* black magic that “mere mortals” couldn’t possibly reason about it. For my small “scientific contribution,” let me observe that I have encoded a portion of the Canadian tax regulations in Prolog. I’m not sure if I got our “T691″ fully done; I’m sure it doesn’t apply to me, so validating it perfectly didn’t worry me terribly much :-). Kat @120, great article, thanks for posting. Re: an earlier comment about just dropping off one’s paperwork with the CPA, does anyone actually do that? I sit down with my CPA and go over everything in detail. Then, when she delivers the return, I’m required to go over it in detail and check for errors before I sign it. (Once I did find an error.) Even if I didn’t find it worthwhile to double-check my return, it’s worth going over just so I understand my taxes and where my deductions are coming from. With that knowledge, I adjust my financial behavior to minimize my tax burden. Another nice thing about my CPA is she nags me about doing those financial chores I know I should do but that I tend to put off until it’s too late, like setting up 529 plans for my kids. 
She mentioned it every year until finally I got it done. Getting an accountant is one of those things a responsible adult should do. Filing every year costs us less than $200, and the accountant always finds an extra deduction or two we didn’t know about, thus paying for himself. As soon as I got married, I stopped doing my own taxes and hired a professional. I don’t want to be the idiot who messes up the family finances because I was too cheap to know I need someone to assist us. Adam: “Is there a point beyond which you view taxation as theft?” In a democratic society, where the tax rates are decided upon (or at least ratified) by a legislature directly elected by the people? No. This is not to say that there isn’t a level after which I might feel my level of taxation is too high. But that’s a separate discussion from whether I consider it theft, given the governmental system we have. @John In that case, your money is not yours by right but is only yours, for now, because the government (and the people, by extension) allow you to keep it. So your original response to LeftField seems off base. You refer to your money as “my money” (it seems that it isn’t really…what does it mean to own something that can be rightfully taken from you without your direct consent at any time?) and you say that it’s none of his business what you do with it (it’s very much his business, as it is the business of every voting age member of this democracy, since it would seem to be our job to decide how much of your paycheck you get to keep). Or is it the case that the money you earn is not yours by right but only by the benevolence of our leaders? Adam, serious question – do you view, say, home owner association fees in the same way? Taxes are a fee to live in a society that provides you with all those wonderful things like Pools, BBQ decks, 24 hour concierge’s who’ll sign for parcels and pick up dry cleaning and so on. You have plenty of options for paying less tax (moving to a more tax friendly state) or going off grid and leaving the US and living somewhere with no tax and giving up your US citizenship. If you want to be a member of the US of A home owners association and enjoy a standard of living like you do, then there will be a fee associated with it, whether you live in a socialist society, like, say, Germany, or a Libertarian one, like… hmmm… I wonder why there aren’t any out and out Libertarian societies in the world? Adam: “In that case, your money is not yours by right but is only yours, for now, because the government (and the people, by extension) allow you to keep it.” Eh. Leaving aside the fact that money is fundamentally an abstract intellectual concept, and so speaking of having a right to it requires an entire suite of basic assumptions that people can argue about until the cows come home, even if one is to accept this particular postulation, whether my income, less taxes, is mine by “right” or by the sufferance of hoi polloi does not change that it is, in fact, mine; the government does not have a claim on it. Whether it may in the future is irrelevant supposition. And what I do with it is neither LeftField’s business nor anyone else’s, if I choose it not to be. Shorter version: Your attempted intellectual gymnastics to suggest my money isn’t actually mine leave me less than satisfied. Daveon @129 Well said. Adam @128 I feel like you’re trying to skew a nuanced conversation into one of two extremes, and equivocate any position between those extremes as one or the other. 
That’s kind of a “high school debate club” way of looking at it, and I don’t think it serves your argument. @Daveon Actually I don’t view homeowner’s associations in the same way for a couple of reasons: 1) With a homeowners association I have an actual contract with terms that are explicitly spelled out which I can choose to accept or not. The homeowner’s association has authority over me because I explicitly give it such authority if I find their terms reasonable. Without my explicit consent the association has no authority. In the case of society, I have given no explicit consent. The argument may be made that my decision to live in a geographic region controlled by some government implies my consent to whatever actions the government may choose to take on my behalf. However, this assumes that the government can legitimately enforce this social contract by removing me from this region if I choose not to live by their terms. You are assuming that the government is legitimate to prove it’s legitimacy. 2) The homeowner’s association contract cannot be used to justify anything and everything that the association does. The terms are explicitly spelled out. With the social contract, there is the assumption that anything a duly elected government does it does with the consent of the governed. Nobody actually believes this is true. Everybody has some concept of an unjust law. That is, a law which has been enacted by an elected government which does not conform to some standard of justice. If there is no standard higher than the government, how can any action of the government be criticized? Why criticize laws that discriminate against same sex couples, for example, if those laws represent the will of the people? You can try to convince others that the laws should be different, but you can’t call those laws unjust. 3) The terms of the homeowner’s association contract cannot change at any time without my explicit consent. I think it follows closely from my last point that the terms of the social contract can not only be used to justify any government action but that they can justify any government action at any time. @Adam, Sorry but not one of those 3 arguments actually holds water in the context, without, effectively, ignoring the entire basis of modern western industrial democracy. I don’t want to go into a full deconstruction as I suspect John will object – but unless your argument is that government of any kind is an un-necessary burden on the individual then not one of the points you give makes any sense. If government is like that, why aren’t more people rushing to live where there isn’t government? @Daveon I’ll stop now since this is definitely getting off topic. I guess we can continue this if an appropriate topic presents itself. Yay! People want to stay on topic. Adam gets a gold star for the day. No, seriously, thanks. I bought a house recently, and didn’t know until I’d been living in it a month or two that the mortgage interest was tax deductible. This was about the time I started noticing people on the radio talking about how stupid this was, and how we should eliminate it. Gotta admit, I’m kinda torn. On one hand, it does sound like a bit of an excessive bonus for owning a home, on the other hand… yay excess? Here’s a question: in the midst of a tax code that most people would agree is mind-bogglingly complex, but that offers (to many) a simplified code for use….how does one determine if their tax situation is ‘complex enough’ for an accountant? 
I’m not being snarky; I’m unclear on the middle ground. Obviously, if you work a job with an odd pay-cycle like Scalzi or have spouses who work in multiple states or rent property and such, that’s complex and an accountant would generally be of benefit. But what if you have few investments, a simple mortgage and happen to earn a lot? What makes your tax burden cross that magical barrier between TurboTax to CPA? @Ben #18, John #22: It’s a real scenario, not a Fox dystopian nightmare. they are in an expensive part of the Bay Area (Mountain View, just north of San Jose). Again, I’m avoiding giving any personal information about them, but the woman is brilliant, and was one of three authors that discovered one of the major classes of internet computer vulnerabilities a couple years ago. Having her not working might be good for her family, but it’s not good from the point of view of the country. When you have high marginal tax rates, it creates too large a disincentive. Steve #46: They are renters, because on their six-digit salary, they can’t afford a house in Mt. View. Cryptic #48: $10,000 net a year doesn’t buy you much in the Bay Area. I’ve lived on $20k gross a year in San Diego, and it’s not fun. It becomes a serious question, after you run the numbers, how much spending time with your babies is worth to you. For her, it needs to be a lot more than 10k a year. I won’t drive anything besides a (current model year, of course) Porsche Carrera to work and back. My restrictions are my own. I could drive something with under 350 horse power and poorer handling, but I refuse. And can you believe that after my car payments, I’m only pulling down -$30,000 a year? I swear, taxes around here… Please. WizardDru: “What makes your tax burden cross that magical barrier between TurboTax to CPA?” For me, as I noted in the entry, I think when you pass from the 1040-A or 1040-EZ form into the longer form you should definitely consider it, or at least doing an initial consult. Others may find their mileage varies. Suffered under the effects of Labour here in the UK? I personally have lost more money and living standards under the Tories than Labour. So I’d say no. My living standards increased drastically and I had much greater liquidity under Blair/Brown than Thatcher/Major. I also love it when I read statements from the US about leaving kids a huge tax bill for #foo. Particularly when #foo=healthcare mainly because we had all those statements in the UK back in the late 40s and 50s and I’m one of those children that was left that tax bill. Money well spent. That tax bill, compared to the alternative, is gratefully paid. I’m glad the previous generation made the choice they did on those. And god bless Aneurin Bevan in particular. People in the US, consider this a message from your future kids. @Bill, how lucky for you that it can be a choice. Living with real money means the choice comes pre-made. I do not think tax is theft but I sure do think the tax code is moronic. Understanding how something works is orthogonal to whether it is stupid or not. Being for or against progressive taxation is one thing, recognizing a piss poor implementation of progressive taxation is something else. Your average third grader could come up with a better plan. It’s also obvious to me our tax dollars are being misspent. I cannot see how anyone can argue with that? 
Huge portions of the federal budget are defense spending (we spend more than the rest of the world combined) and health care (the evidence we are inefficient compared to other nations is overwhelming IMO). Defense spending alone is justification for "misspent". So, we have a moronic implementation of something you may or may not believe in (progressive taxation) and then a lot of the money is pissed away. Why should not people be angry? @137. Buy TurboTax, do a dry run of your return, then get a CPA to do the filing. Diff the amounts. I did that; the CPA came out ahead, but not by a huge margin. Really it is not a huge difference from what I have seen: when your situation is simple and your income is high, AMT dominates anyway. Though based on some comments here I am going to try a different CPA next year. FWIW: Tim Geithner, Secretary of the Treasury, used TurboTax to do his taxes. He still got them wrong and had to pay a penalty (albeit only after he was caught). I'm not saying that we should have a Secretary of the Treasury who is smart enough to do his own taxes. Rather I think the tax code should be simple enough that even an Ivy League educated Secretary of the Treasury should be able to do them without the aid of either an accountant or TurboTax. Well, I have to agree. I (heart) my CPA, but I'd (heart) a simpler tax code even more. Scalzi: Good Point. Think I should do a consult, just to see if I could make things work more in my favor. Bill@138: Bill, that's really not a statement on her taxes, either way. They have self-selected to choose a day-care situation that is pretty outside of the norm, as you specifically cite when you refer to it as a "specific-language-speaking elite preschool". That she's only pocketing around $10K after expenses for working an additional job seems related more to the specific situation she'd like to choose than the actual taxation situation. She could choose a less expensive, less elite school or arrange for a less expensive commute and so on. She isn't working for $10K, she's working for $70K, but then paying expenses for the privilege of that job which mean that she's not pocketing that much. That's not the same as only getting $10K by itself…and most folks wouldn't count a car payment, insurance and other proximate costs as tax costs. What you're paying for with an accountant isn't the ability to fill out Form X. The IRS's forms are a little complicated, but you can fill out almost all of them given the instructions and a little care; they're not rocket science. The accountant's added value is knowing that you need to fill out Form X in the first place. The tax code is emphatically BAD at telling you "you have this particular type of income, or type of expense, or type of property, and thus are eligible for this rebate/required to fill out this form/subject to this extra tax." A monkey can fill out 1040A, but I know several people who came back hundreds or even a few thousand shy because nobody ever told them about Form 8863… I think a lot of the attitudes that are causing the Tea Party swell aren't strictly "my taxes are too high", although certainly there's that sort of person and they haven't necessarily thought things through very well. More than that, though, it's the feeling that the government's disassociated income and expenses altogether.
When the deficit’s 100 billion a year, you can say “well, we ought to tighten our belts a little, and carefully cut out waste, and then we’ll be able to get back to zero.” But if it’s five times that, or ten (!), you’re not talking about trimming the fat; you’re an unemployed guy eating at fancy restaurants every night, because the creditors haven’t caught up (yet) and why not live it up until then? I don’t think that some increases in taxation are a bad idea if the government’s got its fiscal house in order otherwise; I could afford it. But if we’re going to run a deficit the size of a prosperous European country’s entire GDP every year, why pretend that slightly more taxes are the solution to all our problems? Avatar @ 148 – except that the #1 factor in the exploding deficit is Bush’s tax cuts. The #2 factor is funding two wars via tax cuts. Remember, we went into this century with a budget surplus. Bill@138: The irony is that the mortgage interest tax break has the economic affect of propping up housing prices. (Because people can afford to make a higher mortgage payment than rental payment.) Others have said it, but I must thank you personally – ‘…turned into thin strips of Objectivist Jerky by the sort of pitiless sociopath who is actually prepped and ready to live in the world that logically follows these people’s fondest desires’ is the best one-sentence disassembly of Rand ever. In re all the “We make $150,000+ we don’t need a tax preparer” folks in this column: oh yes you do. Yes, the first time a professional preparer did our taxes, the tab was $1200. The savings was four times that *for a single deduction* we didn’t know we were eligible for. The suggestions she made and the changes we made grew that to $10,000 that first year, and $5000 per year afterwards. Every year. So even if your situation is similar and you spend $800 to reap $800 savings, well, *now you know how to do that yourself* and can save the $800 every year. Cheap education, I’d say. And for those of us whose situations remain complex, keep going to the specialist. As for the rest of this . . . yah, almost everything else I’d say would be off-topic. Not that paragraph 2 isn’t, but I’m noting it as (a) a public service and (b) a second anecdote that describes a much different outcome. On the one hand, people think they should get special tax breaks for doing The Right Things, whether that’s saving for retirement, owning a home, outsourcing childcare, investing in stocks, etc. On the other hand, people want a simple tax code. Well, you can’t have both. All those tax breaks are a large part of what makes the tax code complicated in the first place. And when push comes to shove, most people seem to prefer keeping the breaks– at least the ones they can take advantage of. (I recall a large chorus of howls the last time there was a proposal to eliminate the mortgage interest tax exemption, or to cut it back substantially.) @Ben and others – it’s worth remember that Tax Relief on Mortgage interest in the UK was done away with by that old lefty Margaret Thatcher in the belief that it was nothing more than a distortion of the housing market. Bill #138: First off, your friends are largely irrelevant to the tax cut debate as they only make potentially $140K gross, and much less taxable income, so they aren’t getting the tax cut that’s about to expire. So if Obama gets the middle class tax cut that the Republicans are holding hostage, their taxes will go down. 
Their taxes already went down last year when Obama gave most of us a big tax cut. But okay, I’ll bite on the other stuff: if she’s brilliant, why does she have to be the one who stays home with the kid? Why can’t the dad be the stay at home parent? Why is part time work out of the question? Why are they living in the most expensive area? Why not plan to move to an area that is less expensive with less expensive day care available? (I’m not talking downtown Oakland, but there are other suburbs in San Jose.) Why can’t they reduce their expenses to be able to afford the child they want to have? Have they decreased retirement savings? Are they prepared to sell things off? Have they gone to a financial planner? I have plenty of friends who live in New York, Boston, L.A. and San Diego who have kids, who earn much less gross and survive quite well. I lived in San Diego with my husband (sans child) on $22,000 gross income total before taxes — that’s two of us — and it was totally fun. It was also cheaper than where we live now. So I’m not really digging the whole you just don’t understand us folks in California stuff. It just sounds like these people are really inflexible and maybe not ready to have a kid. I mean, that’s $11,666 gross income a month. Not all of that income is taxed, and they can chose how much withholding to have taken out. Plus presumably they have some savings and investments putting out income. Let’s give them the mega day care of $2,000, that’s $9,666 gross. They’ve got rent, health insurance, tax. That will take a big chunk, but if it takes too big a chunk, then they definitely need to talk to accountants and financial planners and they definitely need to look at moving. They also need to look at the long term effects. Yes, if she goes back to work, they may need to cash in a good chunk of savings to pay for the daycare and put off buying a house for longer. But, she won’t be out of the market for five years, falling behind on her skills, and she’ll have the potential to make more than $70,000 as she progresses (teenagers are expensive.) Plus, as someone else noted, she’ll still be paying into Social Security and building credits there, and might be able to put a percentage into retirement savings that otherwise they lose (stock options!.) So that might be worth the sacrifice of $24,000 in extra expenses a year for a few years for the longer term financial gains. In other words, she has no disincentive to work because the government taxes her. In fact, taxes might not be her biggest expense. She has a disincentive to work because she wants to have a certain lifestyle, which is not the government’s problem. Again and again, the complaints we are getting are about people who have chosen particular careers and places to live, who insist on having children and private schools or expensive daycares, who insist on living a lifestyle they can’t actually afford and who seem to have no clue about how to manage taxes or investments to their advantage. And instead of dealing with their own chosen problems, they blame the government, even though we have one of the lowest tax rates, yes even local taxes, in the industrialized Western world. The tax code is complicated because it is full of deductions for people to reduce their tax burden. 
If she’s smart, she might be able to come up with home office, unreimbursed business expenses, and depreciation deductions, plus childcare and child tax credits, that might reduce her taxable income and tax to pay for a good chunk of the expensive daycare she wants. So an accountant would be worth the money. Adam I love all this talk about taxes being theft. And how homeowners associations have contracts with interactions with the governing council of like minded homeowners. It’s almost like a small scale version of how government and taxes work in a democracy. You know, with a constitution and popularly elected representatives. That said, if Imagination Land Galt Gulch has a realworld counterpart that’s functional, I’d love to hear about it and it’s immigration policy. Just wanted to say, contra Todd, that I think my taxes are too low, and I’m a well-off, middle-aged business owner. And before some jackass asks, the reason I’m not sending the surplus money in right now is that I see taxes as a collective effort, requiring fairness. An example: Suppose I go out to dinner as part a group of 10 friends. The bill, with tip, is about $250, and people throw in money. I put in $25, but somehow we only end up with $200. If I notice that first, should I just put in the extra $50? No, we should figure out why we’re short and what’s fair, which might include pressing a couple of cheapskates to pony up. @Steve Simmons: “The savings was four times that *for a single deduction* we didn’t know we were eligible for.” Okay, I’ll bite. What was the single deduction you didn’t know about that saved nearly $5000 in taxes? That deduction had to be like $20,000, right? I always hear these stories, but always want to know what tax loophole the CPA found that normal people don’t know about that was worth a massive deduction like this. And, what are the other deductions your CPA is finding that are worth so much? People keep sending me things on this topic now. Apparently, Quinnipiac University did a poll that had nearly two-thirds of those with household incomes of more than $250,000 a year supporting raising their own taxes to reduce the federal deficit, i.e. letting the tax cuts expire, which is estimated by the CBO to put $700 billion back into the budget over ten years. So, assuming that this poll is more or less on target, we are once again getting railroaded by that one third who leverage their billionaires and their politicians, and by Mr. Murdoch. But a whole bunch of other well off people have some long term perspective. @103 Mea: I believe there’s a marriage benefit and a marriage penalty; which you have depends on your relative salaries. According to About.com (and this matches my vague recollection), you get a benefit if you make very different amounts of money, but are penalized if you have similar salaries . . . at least, depending on income level (they say it improved in 2003, but still exists). I know little about it, being federally single (half of a married-in-some-states same-sex couple), so if I and/or About.com got some of it wrong–sorry. @Charles: I had recently gotten a book contract as part of my one-man consulting business. The book contract was to write about a then-uncommon operating system, UNIX. To write the book, I needed a UNIX system to verify everything on. At the time, the cheapest UNIX system you could get was about $5,000. The U.S. 
was in a bit of a recession at the time, so to stimulate the economy there was a one-year-only change to the tax code allowing businesses to take a big chunk of any new capital expenditure as a credit – not a deduction, a credit. I'd heard of the credit, but didn't think it applied to me. After going over the criteria, the accountant carefully explained what capital investment was and why I qualified. $5000 tax credit, on the spot. I am so going to cherish the term "Objectivist Jerky". Blog title? Band name? E-Mail address? I will find a use for it, I'm sure. Oh, and about the difference between your own personal generosity as opposed to taxation: it's sort of like the difference between your own personal courage and having a military establishment. And sorry, to answer the rest of your questions – a big chunk of the rest was due to the incredible amount of pre-tax income the self-employed can put into retirement accounts – about 18% of overall income, if I recall the 1988 tax laws correctly. We promptly put every dollar into it. There were also the books – I'd been spending $2000 in a typical year on cutting-edge technical books. As a professional, I could deduct whatever cost of that exceeded 2% of my income. Most years, I got no credit from that. But as a business owner, if the *business* bought the books, they were 100% deductible as a business expense. They kept going onto the same shelf in the same room, but suddenly they were 100% deductible instead of 10 to 0%, depending on the year. Between all of these, they typically lowered our family income by a full tax bracket. My point here is that it took a smarter-than-me accountant to realize that when I changed from employed professional to business owner, the exact same pattern of expenditure had a completely different tax footprint. If you don't know enough about money to know what Ricardo's Law of Comparative Advantage is, you need an accountant. If you do, then you understand why you need one. Late to the party, just read through the thread. This just plain hit me over the head: @93 Tal "1. The free market is an inherently horrid way to set policy because, unlike a constitutional representative republic, it doesn't guarantee to protect the rights of minority populations." WTH? The point of the free market isn't to set policy. The free market IS the whole point. It's the freedom to spend your money the way you want to spend it. It has to do with living your life the way you think best. It's the most basic of freedoms. Without that, freedom doesn't really exist. There is a big disparity in the quality of accountants. My first business accountant (I have a small business) advertised as consultants for small business. They were terrible. Took 3 weeks to respond to emails, I had to do research for them. I switched to a new one. This guy cost about $800 more, BUT saved me at least $15,000 in taxes the first year. I used him for 2 years. I saw what he did. I have my returns. Now I just do it myself with QuickBooks and TurboTax (need small business and personal). @Steve Simmons Thanks for this. I appreciate this becoming something more concrete. Though, my original comment said "anyone self-employed". If you have a complicated situation, such as significant and non-obvious business expenses, by all means use a CPA. But, if your income is nearly all salary, there's really not a lot of "loopholes" out there. John @153, I'd give up all the special tax breaks for Doing the Right Thing if I got a (vastly) simpler tax code in exchange.
(Tax rates would have to be slightly lowered to compensate for loss of all the deductions.) Right now the tax code is so byzantine that I think the government’s attempts to encourage certain behaviors such as home ownership and charitable giving through tax breaks are undermined. If people don’t know what the tax breaks are–or under what circumstances they apply–they won’t change their behavior to take advantage of them. For example, I have yet to itemize deductions in a tax return, because every year in which my itemized deductions were higher than the standard deduction, either my income was too high that year and the deductions were phased out, or the AMT ate my lunch. What this means is that every year I have to document everything as if I were going to itemize deductions, then my CPA has to go through the exercise of itemizing deductions (at my expense), and then we throw it all out, because hello AMT. The other problem is that because the tax code is so ridiculously complicated, I go into the tax season having no idea what I will owe or be owed on my return. I may be due a refund of $5000. I may owe the IRS $5000. It all depends on what crazy deductions apply that year, whether I’ve crossed some magical AMT threshhold, etc. As someone who carefully budgets every dollar I spend throughout the year, I find the randomness of this part of my budget extremely frustrating. When this tax code is this complicated, I think it’s time to throw it out and start over. I know it’ll never happen. But a gal can dream. Let’s assume – correctly, as it happens – that I have a simple tax situation as far as I know, am getting hit slightly by AMT, and overall think that perhaps, sure, I should check whether a CPA will help. How do I pick one? [Albany, NY if it matters.] Word-of-mouth would be great if I knew anyone else using one; sadly not so. Advice sought.. @Ewan Have you considered Angie’s List? I’ve not used it for accountants, but it’s one of their categories. We had great results when hiring a heating/air conditioning contractor. @Dave in Georgia, It’s the freedom to spend your money the way you want to spend it. It has to do with living your life the way you think best. It’s the most basic of freedoms. Without that, freedom doesn’t really exist. This statement caused me a degree of physical pain too. But John dealt with this in his opening statement. Freedom is an abstract concept at best. Without laws and systems to enforce those laws, even in a Libertarian Utopia, there wouldn’t be any freedom. @172 Daveon I absolutely agree that a framework of laws is necessary to support freedom and liberty. The federal tax code as written ain’t it. In fact, it’s not even in the same area code. The federal tax code is used as a hammer by legislators to get campaign contributions from those either threatened, or seeking favors. It has nothing to do with collecting revenue fairly. I keep seeing references to paying for roads and the like. That involves state and local taxes — not federal. And the money collected by the feds and given back to states isn’t exactly efficient. It has a finder’s fee taken and strings attached. “The free market IS the whole point.” No. The free market is a tool we use to achieve certain outcomes. Mistaking the mechanism for the meaning is one of the classic errors of fundamentalism. Marxists and class, Freudians and sex, Libertarians and free markets: many of each take a good point way too far. 
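On the "itemize everything and then the AMT ate my lunch" experience a few comments up: the mechanic underneath is that you effectively compute the tax twice, once the regular way with your itemized deductions and once the AMT way with many of those deductions added back and a separate exemption subtracted, and you pay whichever comes out larger. A very rough sketch; every number here is a placeholder, and the real calculation has more adjustments, two AMT rate tiers and a phase-out of the exemption:

```python
AMT_EXEMPTION = 45_000   # placeholder, not the real exemption amount
AMT_RATE = 0.26          # stands in for the real 26%/28% tiers

def regular_tax(income, itemized_deductions):
    taxable = max(income - itemized_deductions, 0)
    return 0.25 * taxable            # stand-in for the regular bracket tables

def tentative_minimum_tax(income, deductions_amt_allows):
    # The AMT adds back deductions it disallows (state/local taxes, misc
    # itemized deductions, etc.), then subtracts its own exemption.
    base = max(income - deductions_amt_allows - AMT_EXEMPTION, 0)
    return AMT_RATE * base

def tax_owed(income, itemized_deductions, deductions_amt_allows):
    return max(regular_tax(income, itemized_deductions),
               tentative_minimum_tax(income, deductions_amt_allows))

# Hypothetical: $200k income, $60k itemized, of which the AMT respects $15k.
print(tax_owed(200_000, 60_000, 15_000))   # the AMT figure wins here
```

With these made-up numbers the tentative minimum comes out higher than the regular tax, so piling up more itemized deductions stops lowering the bill, which is the "document everything, then throw it all out" frustration described above.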
@Dave in Georgia, I don’t think you’re getting anybody here saying that the US tax code is perfect – running a business here makes me long for the warm and fluffy people at HM Revenue myself. But your specific issue seemed to be with the collection of taxes. I’d suggest that part of the problem the US has is in the way it tries to run as a collection of independent nations even when that’s directly again the interests of the citizens and the nation. Some things should be managed nationally and infrastructure, especially where it crosses states which just can’t afford it, is one of those things. @174 300baud “The free market is a tool we use to achieve certain outcomes.” The free market is an outgrowth of personal liberty. You know — the thing the Constitution was designed to protect from the newly formed federal government by limiting its powers. Not that they’re considered limited anymore. At least by some politicians. (Rep. Pete Stark, for instance.) @175 Daveon What you just described is the very basis of a federal system — a national government with limited powers, handling ONLY things that must be done as a nation. Everything else is supposed to be dealt with at the most localized level possible — or not handled by government at all, but by the citizens making their own individual choices in living their lives. The point was to keep the government from turning into a tyranny of some sort. We’re sliding down that slippery slope, and the speed of the descent is picking up geometrically. Bill @11: do I really need to go into my rant about people who want to have one parent stay home for personal reasons pull out the kind of Silly Math you present to pretend “no, really, we’re doing this for financial reasons”? Because I’m worried about getting the Mallet and not in a good way, but I’d suggest your friends need to own up that they want the wife to stay home with their child, instead of seizing on marginal take rates and claiming it’s forcing them into this terrible choice they wanted anyway. Re CPAs, lawyers have a saying about people who want to do self-help for all but the simplest issues: “You can pay me now, or pay me later.” I imagine CPAs are similar. Please note that individuals are faced with a multitude of collectives that impinge on their freedom, and not all of them are governments. Failure to recognize that means you’re making yourself vulnerable. Pitting atomized individuals against collectives is a losing proposition. An individual is going to need to select the proper collective to team up with. @Dave in Georgia, We’re sliding down that slippery slope, and the speed of the descent is picking up geometrically. No, really it isn’t, that’s a great piece of hyperbole but it’s not remotely supported by any facts. Dealing with stuff locally is completely impractical in the 21st century. A rural part of Dakota isn’t going to have anything like the money required to maintain and run a 21st century infrastructure compared to say, Seattle, San Francisco, New York etc… What you’re advocating might have worked for a largely agrarian self sufficient population, but it doesn’t make a blind bit of sense for the modern world unless you want to create a huge underclass of poor regions without access to the cool stuff the people living in cities have. If you do that you’d end up with a perpetual split between people in the Coastal Cities with access to modern infrastructure and facilities and a bunch of people effectively stuck in the mid-20th century with no real prospects… Oh. Right. 
Dave in Georgia: I’m thinking of a term here. What was it? Oh yes, Objectivist Jerky. (Thanks Scalzi.) @ 179 Most of them aren’t the government. While I skew slightly to the libertarian side, the libertarian concept generally fails to acknowledge that what threatens people is the accumulation of power and resources. I would also say that the end result of every free market is monopoly and cartels, which is not, in fact, the sort of thing that leads to people having greater choices. If you owe your butt to the company store, your freedoms are limited in the practical sense no matter who free you might be in a hypothetical sense. Most of the people I’ve come across who think that “taxes are theft” ARE the sort of “pitiless sociopath” you’re talking about. But then, I may have come across them because I’m a science fiction fan. These people just love their Heinlein and their Card; I was afraid to ask if they like Scalzi. @Dave in Georgia, 176: I think you’re mainly wrong there, but it doesn’t matter: the Constitution is *also* a tool created to achieve certain results. Treating it, or Adam Smith, or Ayn Rand as a religious document is just as idiotic as the Marxists and Das Kapital, or Maoists and their little red books. I get that the free market might be the whole point *for you*, but until you get that the rest of us are trying to run a society, you’re always going to end up frustrated in discussions like this. Arguing from axioms that others don’t share is a waste of everybody’s time. Kat Goodwin and mythago: (This thread is probably old and cold, but…) I think the point that both of you are making is that most people who claim taxes are keeping them from working are probably being disingenuous. I tend to agree. Nevertheless a tax, of any size, on any transction (including pay for services) creates a disincentive to entering into that transaction. The question is does that disincentive make a difference. The answer is, only at the margins. For every job there is threshold takehome wage one must earn for it to be worth giving up one’s liesure time. For every person and every job that threshold may be different. If you want me to mow your lawn I need at least $20 takehome. If you are willing to pay me $30, we’re in business, even if I pay 20% ($6) in taxes. If someone threatens to tax me at 30% ($9), that may make me sad, but it won’t keep me from mowing your lawn, because I’d still take home $21. However is someone treatens to tax me at 40% ($12), I’m not lying when I say I won’t take the job, because $18 takehome isn’t worth it to me to give up two hours of reading/video games/time with my kids/sleep/whatever. There is nothing special about taxes, other “but-for” costs create the same disincentive (e.g., gas prices, the babysitter I have to hire). There is nothing magic about my personal $20 threshold. Some might be willing to do it for $10, for others it may take $500 (and there may be some interesting and perhaps troubling differences in how these thresholds vary according to gender). Neverthelss, a tax, even a reasonable well-spent tax, is always a disincentive. But taxes only affect the decision to work for taxpayers at the margin where the incremental disincentive makes a difference. Well no, Blue Valentine, it doesn’t create a disincentive. It’s just that some people don’t like it more than others. And nobody is taxing you at 40% for mowing a lawn. That’s why we have progressive tax rates, with lower income earners sometimes not owing tax at all or only Social Security. 
Try not using fantasy math. Childcare costs are a disincentive to working for lower income levels. The lack of affordable childcare can cause too large a chunk of income to have to go to childcare so that the parent can work. So what happens, usually, is that the women go to work anyway, if they can get a job, because the family needs food, and instead there is no childcare if the kids are 5 and up and if younger, they are often left in precarious childcare situations, rather than a nice $2,000 a month facility. There is occasionally government aid or charitable aid — public schools try to have low cost childcare programs, Boys and Girls Club of America, etc. If it weren’t for Objectivist Jerky, we would have more of that and our society would be in better shape. But the majority of Americans care about nobody else’s kids but their own, unfortunately, and refuse to understand the long term benefits that would accrue from doing so. But the people who are complaining about childcare costs here aren’t in that situation where it’s a choice between food, rent and childcare. They are in a situation where they don’t want to change their lifestyle to achieve their goals. They don’t want to manage their money to come up with a financial plan. They tend to ignore retirement savings they’ve gotten and work benefits when they are complaining about their expenses. I’ll give you a totally inane example that is nowhere as serious as childcare issues, but is nonetheless pertinent. When my husband and I were about to have a baby, we were in the process of moving up from working class to middle class and had bought our first home, but babies are expensive. At that time, we spent about $40 a month on comic books. Just a little recreational expense, something we liked to do (he liked Spiderman, I liked Sandman.) It had started out being a lot smaller an expense, but prices went up, Marvel started doing these massive crossover things, etc. So we stopped buying comic books because it was too much money to spend on that expense when we had a kid. The people who are making these complaints about taxes don’t want to stop buying comic books. I’ve been at several different income levels in my life, and I have family who are in nearly every different tax bracket and situation in life, so I don’t do fantasy math. I realize you’re saying people will have different views about what they are willing to do, but that’s not the same thing as tax being a disincentive burden. If you cannot manage your finances but you have enough income to swing $300-800 for a tax accountant or at least a financial planner, go do that. I agree with John on that. Because it isn’t the government’s job to teach you how to do that. Kat, I made up the numbers in my example to keep the math simple. I agree that most people who mow lawns for pay are not taxed at a 40% marginal rate. But none of that changes my basic point: even the smallest tax creates a (correspondingly small) disincentive. I think our disagrement, if any, is semantic. You wrote that “[taxes] don’t create a disincentive. It’s just that some people don’t like it more than others.” That’s what a disincentive is: something you don’t like. Most of us aren’t anywhere close to the margin when it comes to deciding whether or not we work or not. But plenty of people are, whether its deciding whether or not to take a second job, work an extra shift or whether to retire at 64 vs. 65. 
And when we make that decision we will weigh the benefits of that additional work (gross pay, benefits, job satisfaction) against the burdens (taxes, expenses (child care, wardrobe, commuting costs), loss of leisure time). Each of those burdens is a disincentive. For the record I am in favor of progressive taxation, believe we should return to Clinton era tax rates, and think most high-earners don't do a good job of managing their finances. @Blue Valentine: You write: "There is nothing special about taxes, other "but-for" costs create the same disincentive (e.g., gas prices, the babysitter I have to hire)." I'm not so sure about that. I think taxes are importantly different in a couple of ways. One, everybody pays them. This changes the absolute prices, but not the relative ones, and I think people are much more sensitive to relative prices. Two, because they're not optional, they tend to fade into the background. E.g., people tend to talk about gross salaries or gross prices, not net of tax. So I suspect taxes are special in that they don't have as much behavior-distorting influence as other factors that affect pricing. I agree – we were arguing about the semantics. The CBO testified today in hearings that extending the tax cuts for the 2%, even for two years, would be disastrous. It will run up the debt and not stimulate enough jobs or consumer spending. Which we already knew from the oughts, but the numbers don't get any prettier. 300baud: I agree that relative prices matter more when choosing between job A or job B and most income is taxed the same regardless of which job it is for. But you only pay income tax if and to the extent that you earn income. Leisure time is not taxed. So if the decision is between working and not working then income taxes do affect the relative price difference between the two choices. I think some of you underestimate the amount of budgetary wiggle room present in the upper income brackets. Most of these people are locked into large mortgages that are not discretionary. In the current housing market, selling your house and downgrading, even if you want to, is a year long endeavor. Similarly, private school costs generally run yearly contracts with no real exit clause. When you are making $50K/year an important part of your budget is miscellaneous expenses (comic books and whatnot); however, when you are making $200K you tend to be dominated by a couple of large, more or less fixed things. Another important thing to note is that the upper income brackets tend to receive a fair amount of their compensation in the form of optional year end bonus and/or equity grants. So how much money you make in a given year is hard to predict and is already feeling downward pressure from the economy. So, if you were to theoretically raise my tax rate 1% (not saying that is going to happen mind you) and require me to squeeze out another $3K/year from somewhere, in a year when my real income is already down 15% from bonus shrinkage, that is real pressure which is not easy to compensate for by not ordering as much pizza and things. For me, my day to day budget is dominated by 5 things (roughly 70% of our takehome):
*Mortgage
*School
*Property Tax
*Earthquake and other insurance
*Student loan repayment
Certainly this is all solvable. The fix is usually NOT to target the nickel and dime stuff but to go for the big ticket items. For us, that means moving to the burbs in a place where you can get into a good public school. THAT is the type of thinking that is going on in people's heads.
It’s a big project though, not something that you can do overnight by tightening belts. [pointless late shot by someone with nothing substantive to contribute deleted -- JS] unholyguy: In other words, people who have big incomes live beyond their means, have no financial planning skills, and then get stuck when the economy goes in the dumpster because the really wealthy people got unpaid for tax cuts and corporate bailouts, (and non-performance bonuses that have to be paid to them contractually, meaning $50K salary employees below them get laid off.) And they don’t know how to handle tax deductions — the considerable ones that are available for upper income tax brackets that are not available for lower ones — to reduce their tax in other ways. Yeah, we got that. Which is why it’s a good idea for the well off to go to an accountant or financial planner who will tell them not to buy the big house and do the insanely costly private school and help them actually be able to survive an economic downturn and manage their taxes better. Essentially, we all could benefit from the $250K-600K people, and even the $100K-250K people whose taxes are not going up (and who are likely getting a tax cut,) getting a lot smarter financially. But as for belt tightening on the little things, yes, it is entirely possible to do it and come up with that $3K in tax money for the year (which you can have withdrawn in small amounts from your paycheck over the course of the year,) while you’re working on the longer term project of downsizing the big ticket items. Pizza, videos, art classes, coffee, groceries, sports equipment, clothes, magazine subscriptions — people who are living at those incomes have a lot of discretionary income, but they tend to view a lot of expenses as “necessary” that aren’t necessary at all when there’s a budget crunch. Trade in the expensive car and get a cheaper used one with a lower car payment. I’ve done the cutting back thing at $50K and I’ve done it at $100K, and it’s a lot easier at $100K. It’s called savings. It’s called a budget. It’s called being an adult. Meanwhile, my relative has no discretionary income. Everything in her life is a big ticket item — rent, food, medicine, clothes for her fast growing kids. She doesn’t have retirement savings. There are no bonuses or stock options she can exploit. And gas costs the same for her as it does for people with twenty times her income. (But she’s lucky, she’s got a roof and a truck.) This is about being realistic. Realistically, we could not afford the tax cuts when Bush handed them out, and we cannot afford to continue them. Your taxes will return to what they were in the 1990′s. Your taxable income (not gross income) will remain at the same rate up to $250K. Beyond that, it goes up a tiny bit. If you lived beyond your income means and got yourself in debt, then you have to get yourself out of debt, same as someone on a lower income, and you have far more resources and options for doing so. Including being able to afford going to a tax accountant. This well I’m trapped and there’s nothing I can do about it, poor me attitude is juvenile and isn’t going to wash. It’s like a teenager with a credit card who just learned that you actually have to pay the credit card company back and his parents aren’t going to do it for him. We’re trying to save the whole country here, and if we do, your house values rise again. So cowboy up. 
:) Most of the people I know who use accountants say that the money they spend on them is mostly made up on tax breaks/cuts/whatever that the accountant finds that they would have missed. That, and having a person in your corner come an audit gives warm fuzzies, and few of them begrudge the cost at all. Also, isn’t tax prep itself tax deductible?

@kat, not complaining about taxes, don’t mind paying more taxes. Even though most of the people that are saying “pony up” actually suck more down in federal largess every year than they pay. So it’s an easy thing to say, I suppose, since it amounts to “give me more of your money”. I’m not trapped, there is plenty I could do about it, it just takes some time to maneuver. There are no magic “tax breaks for the upper income crowd”; that is a total crock. I don’t think I spend money on ANYTHING in your discretionary list. Why would I? I have a budget, I have savings, I am actually not in any immediate trouble. Hell, one of the reasons I have such a big mortgage is it’s still a smart investment long haul. However, I got this way by being good at anticipating and taking proactive action, and by maintaining a buffer between my expenses and my income. When something starts to eat into that buffer, I don’t passively wait around for things to get critical. My spidey sense is tingling, as a whole lot of desperate clueless Americans continue to look for an easy way out of an unsustainable lifestyle. I will be shocked if I am not paying 10% more in taxes before this whole thing finishes. The $250K set is not some magic pinata that the rest of the country can keep smacking and watching the money fall out to pay for this mess we have all gotten ourselves into together. Like I said, my simplest way to cut down on my expenditure is to move to a burb with a good public school. That would do a million times more for me than any belt tightening; it would save me 10% of my take-home income a year. That is a lotta pizza. It would effectively transfer a large chunk of my expenses onto the state. It would easily offset the overall value of any tax increase I would have to pay, since the government would be shelling out more to educate my kid than I would be paying in extra taxes. Don’t really want to move to a burb, think burbs are a big part of the reason why we are such a resource-guzzling mess of a society, but if I am incented enough, I will.

“Why is it that the people freaking out the most about taxes on the rich are the ones who suspect they are going to end up doing most of the paying on this shiny new deficit?” The answer is in the question: Because they can do math. They can divide one trillion by 4 million (-:

unholy: “Even though most of the people that are saying “pony up” actually suck more down in federal largess every year than they pay. So it’s an easy thing to say, I suppose, since it amounts to “give me more of your money”.” Well thanks for insulting me, but as I see it, you took my money, not the other way around, and you use way more government resources than I do, and natural resources in general, even if you aren’t using the public schools. You got a tax cut for the 2% wealthiest that we couldn’t afford, that was designed specifically to expire in ten years so that Bush did not have to justify with the CBO how to pay for it. That tax break you got meant that I and others below you had to make up that lost revenue — we paid more, especially when Bush went into Iraq.
Your tax break helped run up the deficit which has helped put the economy into the toilet and lost a lot of middle class people their jobs. So saying less well off people are trying to take your money — you’re just not going to get a lot of sympathy. The 2%, including the people who make $300K, cost us a lot for their tax break, whether or not they use public schools. Look, everyone is hurting. Some people lost jobs. Some people are now stuck with houses that lost worth, which drives up their debt. But these tax cuts the top 2% got are coming to their mandated end, and we cannot afford to continue them. You aren’t getting a tax increase. You’re just no longer getting a tax break — a deduction of your taxes that you got special for being in the top 2% (i.e. a tax break that others below you don’t get.) But you still have your large mortgage interest that you can deduct. If you have investments, your capital gains are taxed at a lesser rate. That’s another tax break that wealthier people get on their income. There are a lot more, which you may find if you go to an accountant. Some better news: – bailout And some logical news: I have a reasonably large income(97th percentile in 2009, 95th for 2010 due to across the board salary and bonus reductions), but a very simple tax situation. Simple enough to handle through TurboTax. However I have had a number of years (working abroad, multi-country, multi state taxes, LLC, etc) where I have used accountants, and felt happier doing so. I also trusted my accountant when she suggested to me that she was not really required the following year so long as things didn’t change (as my tax situation was greatly simplified: 1 house. 1 family. 1 state. 1 salary. No weird investments [independence rules makes them a pain in the ass]). Note that I do recognize that the responsibility for filing correctly is mine – not my old accountant’s! I’ll use an accountant again whenever my situation (personal, or externally imposed) changes. Oh, and to unholy guy@195: moving to another district so your kids can attend public school does not greatly increase the burden on the state (the marginal cost of another child in class is small). The revenue you bring to the area as a higher income, higher tax citizen means that area will benefit from your presence – as will your kids. It would also leave you more money to lavish on local enterprises, charities, and your personal future hedging, instead of simply spending it on already wealthy private schools. “This is a little like saying since you can buy the same surgical equipment as a doctor, you can do your own surgery out of a book. You’re paying for the expertise, not the tools.” Or: That sword you bought in the dealer room does not make you a ninja. The tax tables that you used on Form 1040 this April 15, 2010 expire this December, and on April 15, 2011 the tax tables you will use revert back in time to just before the Bush tax cuts. Not rocket science. Rocket science is easier. (Get a CPA). @kat again, I am not complaining about the tax hike. I don’t mind it. I’m just saying that if the country thinks they are getting out of this mess by taxing the upper incomes only, they are smoking crack. As far as resource utilization, I live in a dense city, take public transportation to work, don’t have to run either heat or air conditioning, don’t drive anywhere really, barely have a car. City living is about the lowest resource utilization lifestyle, suburban is about the highest. Here is a book. 
There is no way anyone can rationalize that the return you get from your tax dollars scales with your tax rate. The top 3% pay half the total taxes. It’s kind of the whole point of a progressive income tax that it doesn’t net out.

@198: Tony, I already pay for public schools through property tax, I just don’t actually benefit from them at all. Just because my kid does not attend, I still get taxed. It’s not like they give it back. And you are wrong about incremental cost; the state of California pays $8900/student. Also, property taxes in the state of California are collected locally but distributed at the state level, so local communities would not benefit from me moving there.

unholy guy: “I am not complaining about the tax hike.” Again, it isn’t a tax hike. It is the expiration of a tax break for that top 3%. “I’m just saying that if the country thinks they are getting out of this mess by taxing the upper incomes only, they are smoking crack.” I don’t think anyone has proposed that to be the case. This is about simply ending the tax break. And our economic situation is much more complicated than that. But the upper incomes are not the only ones taxed. The middle class are taxed as well. And because of the tax code and the greater ability of higher incomes to remove income from tax, they end up giving up more of their income to taxes, even if the sum of the revenue is smaller. And right now, they could use some tax relief, or more and more of them are going to end up in the working class, sliding down the ladder, than they have already. And that isn’t going to be good for you either.

“City living is about the lowest resource utilization lifestyle.” If you are counting the cost of one individual, then a suburban individual is more expensive than a city individual, and certainly takes up more space landwise. But there are millions of people in cities and so cities in aggregate are more expensive than suburbs. The sewer, sanitation, trash pick-up, road maintenance and repair, building maintenance and licensing, gas and electric, water treatment, pollution issues (big one), police, counterterrorism and emergency services, public transport, railroad and shipping issues, erosion and soil issues, health control and pest control, park service, etc. for a city are all a huge drain. Electricity alone, really. It does make the suburbs cheaper to live in, or at least some of them. But that doesn’t really matter since incomes are spread out everywhere. Well-off people use more resources, at least according to figures they give us such as California lawn watering. That doesn’t mean that all people who have high incomes are greedy pigs. But again in aggregate, it does mean that high income folk can’t claim their tax bracket is less of a drain on resources.

Actually my last post is on point. If we didn’t have the bloated tax code we would not need accountants. You made the point that after a household’s income reaches a certain level of complexity where the 1040A/EZ no longer meets their needs, get an accountant. My point was if we have a flat tax no one would need an accountant :) But I see you’re now drowning in other comments, so you may have just been getting through the deluge and didn’t have time for a thoughtful response. Easier to say “sorry, but you’re off topic” and move on.
Even if you had a flat tax, that goal would still be in place, requiring accountants to help reduce the size of your total income declared. You would still have very wealthy people moving money into offshore accounts, tax free investments and paper companies so that it couldn’t be taxed at all. That needs accountants too. If we had a flat tax, revenues would flatline, more people would be laid off, wealthy people would have a massive, permanent tax break and the middle class and the working class would have a large, permanent tax increase, and we’d be further along to a banana republic. The math on a flat tax is quite clear — it’s only useful for the wealthy. And no, I’m not interested in you trying to convince me otherwise because I’ve seen the statistics on it. Regardless, a progressive tax system is what we have, and Scalzi said that is what this thread is supposed to focus on, not alternate tax systems. @Kat actually no. I was referring to my first post which mentioned Fair Tax. Fair tax is a sales tax that is a flat tax. More you buy the more your taxed. No accountants required for the tax payer. Fair Tax people have proven mathematically that the individual tax burden would be less not more because we could fire all the people who are making sure the convoluted system is somewhat enforced. (leaner IRS) Because there is less waste spent on enforcement the effective tax rate could be a smaller percentage for the same budget. I think this is something that could be bipartisan. It is smaller government and lifts the April 15th burden off the tax payers plate. Business all do state sale taxes so the infrastructure is already in place for collecting it. There are less businesses than people so it would be easier to enforce. You can put in the same incentives and make it progressive just like the current system through tax holidays and special programs for the poor who can’t afford to pay etc. All that would involve no where the number of accountants we need today. First off, not interested in fantasy math. Second, again, Scalzi said he’s not interested in discussion of alternative tax systems. Third, yes, I understand you were just clarifying. It’s been three threads of people feeling put upon and persecuted, and no reasonable argument seems to dissuade that notion, so I’m done. Excellent song on BBC FRIDAY NIGHT COMEDY (which I get via podcast) a few months ago about how we’ve already got a great example of a no-taxation, no-government paradise whose example we can learn from: Somalia. It seems like if you’re going to write an article about ignorance, and then make a list of people you think are ignorant (i.e. “people who don’t understand taxes are as stupid as people who think the Bible said…”) you’d maybe want to do at least a rudimentary Google search on the items in your list of things you think are ignorant. Darwin said in The Descent of Man, and Selection in Relation to Sex that we are descended from ‘Old World monkeys’. Specifically, he said: “The Simiadae then branched off into two great stems, the New World and Old World Monkeys; and from the latter at a remote period, Man, the wonder and glory of the universe, proceeded.” It sounds like you were at a dinner party where someone was clapping himself on the back for being smarter than everyone else and listing things people were commonly wrong about (like, the quote that you correctly stated is not in the Bible) and you just took for granted everything he said was true and repeated it here. 
You are definitely right, though, ignorance is easily correctable. Michael Davis:. This is the thread that will not die. I quit commenting on it a while back, but when I see silliness like this, I’m almost forced to say something. @193 Kat Goodwin “Realistically, we could not afford the tax cuts when Bush handed them out, and we cannot afford to continue them. ” Each time there’s been a tax rate cut, the amount of revenue coming into the Federal Treasury increases. It’s simple — the cut stimulates economic activity, and although the Feds are be getting a smaller share of the pie, the pie is bigger so they actually get more pie. It’s not a revenue problem — it’s a spending problem. And saying that what’s coming is not a tax increase is bullshit semantics. The tax rates are going up, no matter how you want to spin it, and that’s going to depress the economy. The phraseology of “it’s not really a tax hike” has something to do with using makeup to disguise porcine qualities. As for the progressive tax system, its biggest downfall is the fact that there are people who don’t pay into the system. It’s easy to pass tax rate increases when you’re not the one paying the freight. At some point, the people paying the freight pick up their marbles and go home, since it’s not worth their time and effort any more. And when they reach the “why bother” point, businesses cut back on employees and purchases, if they don’t close entirely. The folks closing up shop aren’t hurt — they’ve got theirs. The only ones hurt are the people that lose their jobs or the business that was created by those that packed it in. I’m not going to get into JR’s taxtopia except to say that I once asked Mythagomom, who is a tax attorney and CPA, whether she wasn’t worried about job security if a flat tax was passed. When she stopped laughing long enough to breathe, she explained that there is no such thing as a real ‘flat tax’ because a) you have to define what’s taxed, b) people will find ways to try and evade those taxes, thus reducing revenue and enforcement and c) the very reason that we have a complex tax code is that people want to play around with exceptions and deductions, and there’s no reason at all to think that people (including the corporate kind) will suddenly change. Dave in Georgia: Again, I’m not interested in fantasy math and voodoo economics. The CBO has explained ad nauseum that the tax cuts are not stimulating the economy, that they’ve had a large impact on running up the deficit, and that wealthy people and corporations are saving and sitting on cash when they get tax breaks and loans, not putting it into the economy and not providing jobs. During the Bush years with his tax cuts, there was zero job growth and practically no growth in income. They are calling it the “lost decade.” Not extending the tax cuts for the 2% will help with the deficit, while trying to make them permanent will run it up further, exacerbating the conditions that led to the Great Recession in the first place. Your poor people should pay more while wealthy people pay less line is basic trickle down economics which has been proven over the last thirty years to be absolute bunk. And re your going Galt threat, I still think the phrase Objectivist Jerky is pertinent. Seriously, I agree with other posters earlier, if you want to leave, leave as soon as you like. The company store model of capitalism doesn’t help us; it just makes us a banana republic. 
But re the actual topic of this thread that will not die, here’s a tax shelter for the well-off:

Kat, life insurance is primarily used as a way to get around estate taxes, not income tax. They really only help you when you die; it’s a way to leave more money to your kids. I agree that there are problems with some of the loopholes in the death tax, but it doesn’t really have a lot to do with our income tax discussions.

Sure it does. It has to do with the financial planning that you do with a tax accountant or financial planner to minimize your overall taxes. It’s part of a person’s overall investment and tax strategy.

I don’t think something you can only cash in after you die counts as an investment strategy. Or at least not a very good one. Any plan that has you dying in order to reap the benefit is not a good plan (-:

Most “permanent” or “whole” life insurance policies will allow you to borrow against your cash balance before you die. Interest accrues and further reduces your cash balance, but so long as your cash balance is sufficient there is no requirement to repay the loan before you die. So certain life insurance policies can indeed be a good way to allow saving to compound tax free for your own use, not just that of your heirs (think of it like a Roth IRA).

unholyguy – “The top 3% pay half the total taxes.” And the top 1% pay 40% of the total income taxes; coincidentally, they own 42% of the total wealth. What’s your point?

His point is that he doesn’t want to pay for others, a philosophy neatly illustrated by Robert Reich on Countdown: Short-term Social Darwinism lives and is being employed oddly enough by people who mostly believe in creationism. It goes along with voodoo economics well.

@212 Kat The reason EVERYONE needs to pay something in terms of income taxes (if you insist on having the damned things) is that otherwise they have no skin in the game. It’s easy to raise taxes if you’re not the ones paying ‘em. To paraphrase, once people realize they can vote themselves money out of the public treasury, the game is over. And income redistribution on this scale more than qualifies. And as for the income tax rates, I don’t have the figures on hand, but I recall the top 1% paid something like 35 percent of the income tax collected while making about 16% of the income. Remember, the tax isn’t on wealth — it’s on that year’s income. And who said I was Going Galt? I can’t afford to. I have a mortgage to pay, etc. I’m trapped in this cage with the rest of you while the political monkeys fling poo at us. I’m just trying to get through the day and pay the bills without getting ripped off any worse than I already am and keep the really big balls of poo from hitting my family.

That’s not my point Kat. My point is that if you think rich people use an amount of social services commensurate with their tax rates you are kidding yourself. Social services are for the most part funded by rich people and consumed by the middle class and lower. That is the nature and design of a progressive tax. That is the ENTIRE point of it. You can agree or disagree with whether a progressive tax is the right method (I personally agree with progressive tax) but it is what it is. There is a break-even point where everyone below that rate gets more out of the government than they pay in. In theory, anyone below that point is incented to see taxes increase since they get more out of the increase than they pay.
Which means you’re screwed when corporations support that same philosophy and the politicians who espouse it and you vote for them. When poo gets thrown at other families, it gets thrown on yours because we are dependent on each other. It’s what is known in political science as a collective action problem. You can believe that or not, as you please.

Unholy Guy: Like I said, I have relatives in all areas of income. I know what they use in government (not just social) services — and I know what they put back into the economy, not just in taxes but in spending and work productivity. Wealthy people take up more resources, which has made it a lot harder for the middle class. That doesn’t mean that the poor are not a burden — they are a large burden — but that doesn’t mean the wealthy are getting squeezed. Some wealthy people also bitch about freeloaders because of short term costs instead of seeing it as a collective action issue requiring long term investment in people. I also agree with a progressive tax. Put simply, if you have money, that’s your problem, not the poor people’s, not the government’s. Get an accountant and manage it, manage your tax liability. If you want to argue that going back to tax rates for the top 2% so that they have to pay 3% more will collapse the economy and lay off lots of people, well, 50 years of cutting those top tax rates under the same threat so that they are now less than half of what they were in the 1960s failed to stop the wealthy and corporations from collapsing the economy and laying off lots of people. So I’m willing to risk it. :)

Kat, what you’re talking about is also known as the Tragedy of the Commons. In a situation where there is collective ownership of something, it’s in the best interest of those sharing the resources to grab all they can before they’re used up. An individual that owns a resource has it in their interest to preserve and care for something so they’ll continue to reap the benefits of that resource. When you’re dependent on seizing wealth from others, that’s when it becomes a race to the bottom.

Hmmm, using resources. That assumes that it’s all a zero-sum game. Those with money have to purchase things, which requires others to make those things or provide those services. They in turn need to purchase… etc. etc. etc. Last I heard, this was called the economy. And you’re assuming that everyone is totally dominated by economics. There’s also this little thing called quality of life, where decisions are made on factors other than pure economics. Americans give a pantload to charities to help their fellow man for this very reason. I’ve got a couple of area charities I give to each year.

Yeah, if you’ve got money, that’s your problem. You have to hire accountants to keep folks from stealing from you, and in your description the group that you first have to defend yourself from is — the government. Son of a gun. I agree with you on something.

Kat, you are hearing arguments from me that are not actually coming from me, they are coming from inside your own head. All those things you think I am arguing I have never actually said; I am mostly saying the opposite. I am quite willing to pay my share and tighten my belt. Do you actually read what I am writing, or do you just read the first sentence and then kind of make up the rest? I’ve never said there is any problem with raising the upper end of the tax bracket, I have said over and over that I agree with that and support it. I actually think it doesn’t go far enough.
10% is probably better at this stage. The thing that concerns me, is that we, as a nation, are living outside our means with regards to government spending and taxation. The long term fix for this is probably not raising taxes on the upper %1, 2%, 3% or 10%. Those tax brackets simply cannot pay for all the government expenditure no matter who we raise taxes on, or for how much. Even if you took their entire paycheck we still could not balance our books. It’s that bad. The money simply isn’t there. However, we have a large segment of the population that consumes government services and is pretty resistant to any attempt to decrease them. Hint. This is not the rich people I am talking about. This is the middle and lower classes. Everyone is going to have to learn to belt tighten quite a bit. This includes all you people out there making $40K/year who mostly are not paying enough in taxes to even pay for the public school your kid goes to, much less all the rest of it. If you are looking at people like me to keep you in the style to which you have become accustomed, good luck with that. I’m not talking about “putting back into the economy” I don’t know what that means to you, and if you are talking about “contributing value to society” it’s a huge kettle of fish to untangle. The people making $40K have been tightening their belts for the last 30 years, which is why so many of them are slipping into the working class and working two jobs, and why the working class can’t get ahead to make it into the middle class. And they’ve been doing it because the government has gone along with the demands of the 1% and the bigger corporations — tax cuts on the top brackets, deregulation, increases in spending on corporate pork and defense spending and continual spending cuts in social services and education, resulting in the trashing of our public school systems. The government did all that because of threats and influence from that top 1%, and by doing it, allowed the wealthy and corporations to close factories, move jobs overseas, lay people off to drive up stock prices and their compensation, and get corporate bailouts every ten years — the very things that they threatened. The rich spend, but they don’t spend enough. They sit on cash and save it, parking it in offshore accounts and tax free investments. It’s the middle class whose spending keeps the economy going. The wealthy also give less of their income to charity. It’s again the middle class who give more to charity, which is why the charities are also hurting. The recent poll of small businesses was not that they were worried about taxes, regulations or even employee benefits, but about increasing consumer demand and improving the economy — consumer demand comes mainly from the middle class, not the wealthy. But the wealthy have crippled the middle class. They are wealthier than they’ve ever been, while the middle class has shrunk. And that stops the heart of the economy. That gets you a banana republic. You’ve bought into the idea that it’s government that is the problem, and that it’s social services spending that is the main problem. Statistically, this is dead wrong. The stimulus worked, even though it was too small — and it went primarily to the middle class and working class, which corporations don’t like, and so there won’t be any more. Which is why they are pouring money into Meg Whitman’s and other libertarian Republican campaigns. And there will be more corporate welfare. They don’t give a hoot about the deficit. You’re being conned. 
Without a functioning middle class, there will be a greater increase on the tax burden on the wealthy and a government that goes greater into debt. To stop the cycle, the wealthy need to pay a little bit more now, to give up a tax break, to invest in the middle class and the working class and the poor, to practice competent, long-term strategic capitalism. And that means government aid, not dumping them further in the soup and claiming that the poor and middle class are just lazy and if the very wealthy all buy a limo each with their tax savings that it will save the economy. Groceries will save the economy. I’m sorry, unholy guy, but people in your tax bracket have squeezed all the blood you’re going to get out of the middle class. If you’re expecting them to step up and do their share, you’re going to be disappointed, because you already took their share and left them unemployed to boot. The wealthy cannot necessarily save the economy, but if they stopped killing it, that would at least help. However, the very wealthy and corporations have little incentive to stop as the U.S. market is just one part of the global market. The middle class and the lower classes are resistant to having decreases in government aid — because there’s been less and less of it over thirty years and because they are desperate. And that’s because the wealthy sunk them, and don’t care if the younger set produces skilled workers or not. So the middle class is further crippled from stepping up. And that’s had a negative impact on the wealthy as well, that’s soured their economy, for which they inaccurately blame Main Street, not Wall Street. The data is not something that some of the 2% wants to hear, apparently. And that’s unfortunately the part that is largely in charge. So even if we come out of this economic downfall, we’re going to have the same problem — corporations raiding at the top, then blaming the bottom that they’ve made bigger, claiming that if they just get even more breaks, that money will trickle down, really it will. We do need to trim spending, waste, earmarks. We need to invest in decaying infrastructure and education and healthcare. And luckily, the TARP bailout did stem the bleeding and we may even make money off of it, so we’re good until the next Wall Street disaster where they make more record profits and the middle class gets smaller. But I don’t know if what needs to be done is going to get done, because apparently I am a big crybaby over-entitled freeloader while you are a poor top tax bracket far more valuable person who can’t afford me. :) You know what, just ignore what I just put up. I’m tired of going ring around the rosie about it. Kat, these things are not happening because of social services and government spending. fundamentally, i believe similar to you that the decline of the middle class in the US, and the US economy in general is due primarily to globalization and the loss of middle class jobs, especially in manufacturing sector. No jobs mean a crappy economy. People can argue whether or not it is an inevitable thing, coming from the shift to a global economy, but our government is certainly not doing a lot to slow it down or stop it. Where you are making your error is you are no aiming your anger high enough. The global power elite that make these decisions are so far above my income bracket that they basically live on another planet. They don’t hold jobs and they don’t pay income tax. Not to anyone, ever. Try to make them, they switch countries. 
Everyone has tightened their belts, it is true. Everyone is going to have to tighten them that much more. There is no way around it, because we as a country have gotten poorer and are spending way more money than we have. It’s not just entitlements, or tax rates, though getting rid of all these stupid wars would help. Even if we were spending efficiently, we cannot afford what we used to be able to afford. There simply is not enough money; we need to start living within our budget.
I have not seen any evidence that the government is spending less on anything; I think we are just borrowing more. I don’t think there have been any widespread cutbacks on government spending in the US, all I hear is a lot of talk, everyone wanting less taxes, from the tea party to the millionaires. When and if people talk about cutting spending or raising taxes, it is always NIMBY cuts or hikes for others. Every segment of the population has some plan for some other segment to assume the burden. No one wants to give up anything. Everyone wants someone else to pay. That kind of thinking has about run its inevitable course, I think. Gotta pay the piper eventually.

There have been spending cuts; it’s just the media doesn’t find them as much fun to report. And the government aid has to keep flowing because it’s working and because the corporations are either sitting on their cash or using it to buy back their stock instead of hiring people. We can’t keep cutting education. We have to make more defense cuts, but it’s tricky and we don’t want to screw veterans. Obama can’t magically erase all the debt Bush piled up. But the TARP stuff is now turning around, there are anemic signs of improvement. We’re going to be limping for a long while and in debt and with high unemployment that strains social programs. But if we don’t invest in people, we’ll sink deeper, and that means aid, healthcare, education spending, infrastructure repair — all the bugaboos a third of the country hates. So it’s going to be a slog. Which means it is even more important, if you do have a decent sized income, to see an accountant and plan for the future, for emergencies that might occur, for taxes, for your heirs. That’s assuming we don’t all become Objectivist Jerky, of course.
Nearly half your taxes goes on the most bloated military the planet has ever seen, a military larger than the militaries of the next 4 or so *combined*. “This is not the rich people I am talking about. This is the middle and lower classes.” Fail. Most of that money goes to companies, some of which is kicked back as bribes (Sorry! “Campaign contributions”) and most of which goes into the pockets of the executive class. Not much I can add to this conversation at this point, so I’ll just say: “Objectivist Jerky”/”That’ll be a fine last thought for you as the starving remnants of the society of takers closes in with their flensing tools.” New favorite blogger: Found. And relatively reasonable, intelligent responses in the comments section, with sharp wit besides, and not too much foaming at the mouth and flying spittle from either side. Color me impressed. I’ll definitely be back. Considering that taxes are intrinsically unfair (in the sense that they cover the same number of police callouts whether I need them once in a lifetime or once per year, I pay the same towards roads even if I live on a farm and never drive on public roads, they pay for schooling for my kids whether I have one, or seven, or none and never got US schooling myself because I’m a foreigner: we all pay taxes to have a functioning society because the other option is that there’s no government and we all live in walled compounds which we defend ourselves, nobody paves roads outside their own land, there’s nobody to call if someone kills all your family, and you have to educate your own children or pay someone else to do it) I repeat, intrinsically unfair in that sense – taxes therefore have to be about doing the most good and the least harm. If that means people who struggle to buy enough food every week pay a bit less, and those dudes making a million dollars per year pay a few percent more in tax, too bad. Hey you millionaires – if you ever find yourself owning NOTHING beside the clothes you wear, you won’t have to starve to death on the streets – because of taxpayers. Would you deny others who are poor today the right to not starve to death on the streets? (Anybody still reading?) You actually don’t need to know anything about the tax code. All you need to do is write down your gross income, draw a horizontal line above it, and above that, write down the total tax you owe. Then solve. I use an online tax filing program that calls this magical item the “effective tax rate.”
Send the same reports as there are in "Reporting->Phone Calls Analysis" daily via an email

And I'd like to add domains to it (e.g. domain="[('date','=',time.strftime('%Y-%m-%d')),('state','=','done')]" ). I have tried making email templates using the crm.phonecall.report.graph module, but with no success.

Managed to do it using the API and Python. You can make it run daily/weekly/monthly by using cron. (I'm running this on an Ubuntu server.)

References:

Code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import datetime
from datetime import date
import xmlrpclib
from xmlrpclib import ServerProxy
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

SERVER = 'SERVERADDRESS'
db = 'DBNAME'
user_id = 'admin'
password = 'password'

# Connecting to the server
server = ServerProxy(SERVER+'/xmlrpc/common')
uid = server.login(db, user_id, password)
server = ServerProxy(SERVER+'/xmlrpc/object')

start = 'today'
stop = 'tomorrow'

# Example query
# returns a dictionary or False
query = server.execute_kw(db, uid, password, 'crm.phonecall', 'search_read',
    [[['date','>=',start],['date','<=',stop],['state','ilike','done']]],
    {'fields': ['user_id']})

# PARSE YOUR DATA

msg = MIMEMultipart('alternative')
COMMASPACE = ', '
people = ['person1@space.com','person2@otherplace.com']
me = 'me@me.me'
msg['Subject'] = 'SUBJECT'
msg['From'] = me
msg['To'] = COMMASPACE.join(people)

html = """\
<html>
<head>
<style></style>
</head>
<body>Parsed data from the query goes here. You can use CSS.</body>
</html>
"""

part1 = MIMEText(html, 'html')
msg.attach(part1)

# Send the message via our own SMTP server, but don't include the
# envelope header.
s = smtplib.SMTP('SMTPADDRESS')
s.starttls()
s.login('me@me.me', 'password123')
s.sendmail('me@me.me', people, msg.as_string())
s.quit()

If you share your solution, it would benefit others. (And your 'solution' would be worth the correct answer!!)

Create an automated action (scheduler) that runs every day. Make it browse through the records and search for the reports you need (your date filtering). Send the e-mail with the send_mail function and simply attach the report to the e-mail.

Could you elaborate a bit more? I've never done any automated actions, so a little guide would be awesome.
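A minimal sketch of the cron step mentioned in the answer above (the path and schedule here are illustrative, not from the original post; adjust them to wherever the script actually lives): assuming the Python script is saved as /home/odoo/phonecall_report.py, running crontab -e on the Ubuntu server and adding a line such as

# minute hour day-of-month month day-of-week  command
0 6 * * * /usr/bin/python /home/odoo/phonecall_report.py

would run the report every day at 06:00. Changing the first five fields gives a weekly or monthly schedule instead.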
CGI Developer's Guide

Chapter 7
Testing and Debugging CGI

Debugging CGI programs is sometimes a difficult task because they rely on different information from several different sources. There are several different ways you can test your CGI programs, both interactively over the Web and stand-alone using a debugger. Both of these approaches have different advantages and disadvantages. In this chapter, you learn some common debugging techniques using CGI scripts and common debuggers as tools. You then learn some very common CGI errors and solutions.

Debugging Techniques

There are two different approaches to testing and debugging CGI programs: testing the program over the Web server as a CGI program and testing it as a stand-alone program. Although you can open HTML and other files directly from a Web browser, you need to have a Web server running in order to test the results of a CGI program from a Web browser. If you already have a server from which you can test your CGI programs or if you set up a personal or experimental server for testing purposes, how can you debug your CGI programs?

There are several steps you can take. First, see if your program works. If it doesn't and if you receive a server error message, your program did not execute correctly. If you do not receive a server error message but your output is incorrect, then there is most likely a problem either with one of your algorithms or with the expected data.

There are several potential server error messages, the simplest being ones such as "file not found" (404). One of the most common server error messages when your CGI program is not working properly is "server error" (500), which means that your CGI program did not send an appropriate response to the server. The server always expects CGI headers (such as Content-Type) and usually some data; if the appropriate headers are not sent, then the server will return a 500 error. For example, the following program returns the error 500 because the header is invalid:

#include <stdio.h>

int main() {
    printf("Cotnent-Tpye: txet/plain\r\n\r\n");
    printf("Hello, World!\n");
}

If you check your server error logs, you are likely to find a message that says the headers are invalid. If you know your program should return the appropriate headers (that is, you have the proper print statements in the proper places), then your program has failed somewhere before the headers are sent. For example, the following C code seems to be a valid CGI program:

#include <stdio.h>
#include <string.h>

int main() {
    char *name;
    strcpy(name,NULL);
    printf("Content-Type: text/plain\r\n\r\n");
    printf("Hello, world!\n");
}

This program will compile fine and the headers it prints are valid, but when you try to run it from the Web server, the server returns an error 500. The reason is clear in this contrived example: strcpy() produces a segmentation fault when you try to copy a NULL value to a string. Because the program crashes before the header is sent, the server never receives valid information and so must return an error 500. Removing the strcpy() line from the program fixes the problem.

Another common browser message is Document contains no data. This message appears when a successful status code (200) and Content-Type are sent but no data is. If you know your program should print data following the header, you can infer that the problem lies between the header and body output.
Consider the modified code:

#include <stdio.h>
#include <string.h>

int main() {
    char *name;
    printf("Content-Type: text/plain\r\n\r\n");
    strcpy(name,NULL);
    printf("Hello, world!\n");
}

If you compile and run this program as a CGI, you will receive a Document contains no data message but no error. However, there is supposed to be data: "Hello, world!". Again, the error is clear: You cannot copy a NULL string to a variable. Because the program crashes after the header is printed, the body is never sent, and consequently, the browser thinks the document has no data. The error message helps you narrow down the location of the error and quickly identify the problem.

With a compiled language such as C, server error 500 generally means that the program has crashed before the header has been sent. Any syntax errors in the code are caught at compile-time. However, because scripting languages such as Perl are interpreted languages, you don't know whether there are syntax errors until you actually run the program. If there are syntax errors, then the program will crash immediately and once again, you will see the familiar error 500. For example:

#!/usr/local/bin/perl

pirnt "Content-Type: text/plain\n\n";
print "Hello, World!\n";

There is a typo in the first print statement, so the program will not run, and consequently, the server receives no headers and sends an error 500. If your server logs stderr to an error file, you can find exactly where the syntax errors are by checking the log.

How can you debug your program if it runs correctly, does not crash, but returns the incorrect output? Normally, you could run your program through a debugger and watch the important variables to see exactly where your program is flawed. However, you cannot run the CGI program through a debugger if it is being run by the server. If you are testing your CGI program in this manner, you want to take advantage of the server and the browser to locate the error.

The poor man's method of debugging is to include a lot of print statements throughout the code. Because everything printed to stdout is sent to the browser, you can look at the values of various variables from your Web browser. For example, the following code is supposed to output the numbers 1 factorial (1), 2 factorial (2), and 3 factorial (6):

#include <stdio.h>

int main() {
    int product = 1;
    int i;

    printf("Content-Type: text/html\r\n\r\n");
    printf("<html><head>\n");
    printf("<title>1, 2, and 6</title>\n");
    printf("</head>\n\n");
    printf("<body>\n");
    for (i=1; i<=3; i++)
        printf("<p>%d</p>\n",product*i);
    printf("</body></html>\n");
}

When you compile and run this program as a CGI, you get 1, 2, and 3 as shown in Figure 7.1. Suppose for the moment that this is a vastly complex program and that you cannot for the life of you figure out why this code is not working properly. To give you more information and help you trace the problem, you could print the values of product and i at each stage of the loop. Adding the appropriate lines of code produces the output in Figure 7.2.

Figure 7.1 : Output of buggy factorial program.

Figure 7.2 : Output of buggy factorial program with debugging information.
#include <stdio.h>

int main() {
    int product = 1;
    int i;

    printf("Content-Type: text/html\r\n\r\n");
    printf("<html><head>\n");
    printf("<title>1, 2, and 6</title>\n");
    printf("</head>\n\n");
    printf("<body>\n");
    for (i=1; i<=3; i++) {
        /* print product and i */
        printf("<p>product = %d i = %d<br>\n",product,i);
        printf("%d</p>\n",product*i);
    }
    printf("</body></html>\n");
}

With this additional information, you can see that the value of product is not updating each time; it remains 1 at each iteration. You can easily fix this bug and produce the correct output in Figure 7.3.

Figure 7.3 : Output of correct factorial program.

#include <stdio.h>

int main() {
    int product = 1;
    int i;

    printf("Content-Type: text/html\r\n\r\n");
    printf("<html><head>\n");
    printf("<title>1, 2, and 6</title>\n");
    printf("</head>\n\n");
    printf("<body>\n");
    for (i=1; i<=3; i++) {
        product = product * i;
        printf("<p>%d</p>\n",product);
    }
    printf("</body></html>\n");
}

Although using print statements is a simple and workable solution, it can be an inconvenient one, especially if you use a compiled language such as C. Each time you are debugging the program or making a slight change, you need to add or remove print statements and recompile. It would be easier if you could just run the program directly from within a debugger.

You could run the program from within a debugger if you could correctly simulate a CGI program from the command line. This is possible but difficult because of the many variables you need to set. There are several environment variables that the CGI program might or might not rely on. For example, if you are testing a CGI program from the command line that accepts form input, you need to at least set the environment variable REQUEST_METHOD so that your program knows where to get the information. You must also properly URL encode the input, a non-trivial matter if you use a lot of non-alphanumeric characters.

There are two ways to address this problem. The first is a somewhat minimalist approach. Determine and set as many environment variables and other information as you need and then run the program. For example, if you are testing program.cgi and you know that you are using the GET method and that the input string is name=Eugene&age=21 you could do the following (from the UNIX csh shell with the gdb debugger):

% setenv REQUEST_METHOD GET
% setenv QUERY_STRING 'name=Eugene&age=21'
% gdb program.cgi

Because all of the necessary information is set, the debugger runs the program without any problems almost as if the program were running from a Web server. You could create more advanced implementations of this solution. For example, instead of setting each variable manually, you could write a wrapper script that sets all of the appropriate environment variables and the input and runs the program through the debugger.

The second way to address the problem of simulating a CGI program from the command line is to actually run the program from the Web server and save the state information to a file. Then, when you are ready to debug, load the state file and use that information as the state information. Several CGI programming libraries have implemented features that save and load state information. Although this is a good solution for obtaining and testing CGI programs using the exact same information you would have under real Web conditions, it also requires modification of the code every time you save or load state information. This might not be a desirable task.
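As a sketch of the wrapper-script idea mentioned above (an illustration, not part of the original chapter; the script name and sample query string are made up), a small Bourne shell script could set the environment and start the debugger in one step:

#!/bin/sh
# debugcgi.sh - run a CGI program under gdb with a simulated GET request
# usage: ./debugcgi.sh program.cgi 'name=Eugene&age=21'
REQUEST_METHOD=GET
QUERY_STRING="$2"
export REQUEST_METHOD QUERY_STRING
exec gdb "$1"

You would still type run at the gdb prompt, and a POST request would also need CONTENT_LENGTH set and the body supplied on stdin, but for simple GET-style testing this saves retyping the setenv lines every session.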
Testing Forms

The main difficulty in testing forms is testing CGI programs that accept and parse input. A CGI program that just sends some output to the Web server, possibly based on the value of one environment variable such as HTTP_ACCEPT, is very simple to test from the command line because you usually do not need to worry about presetting the appropriate variables. I have already listed a few different ways of setting the input so that your CGI program runs properly from the command line. These are fairly good general solutions for debugging your programs.

One possible source of bugs is not knowing what type of input you are actually receiving. For example, suppose you wrote some code that parsed data from the following HTML form and returned the data in a different format:

<html><head>
<title>Form</title>
</head>
<body>
<h1>Form</h1>
<form action="/cgi-bin/poll.cgi" method=POST>
<p>Name: <input name="name"></p>
<p>Do you like (check all that apply):<br>
<input type=checkbox name="vegetable" value="carrots"> Carrots?<br>
<input type=checkbox name="vegetable" value="celery"> Celery?<br>
<input type=checkbox name="vegetable" value="lettuce"> Lettuce?</p>
<input type=submit>
</form>
</body></html>

Remember, if the user does not check any checkboxes, then none of that information is submitted to the CGI program. If you, the CGI programmer, forgot this and assumed that you would have a blank value for "vegetable" rather than no entry labeled "vegetable" at all, your CGI program might produce some surprising output. Because you did not properly predict what kind of input you would receive, you inadvertently introduced a bug in your program.

Avoiding this situation means making sure the input looks as you expect it to look. You can use the program test.cgi in Listing 7.1 as a temporary CGI program for processing forms in order to see the exact format of the input. test.cgi simply lists the environment variables and values and information from stdin if it exists.

Listing 7.1. test.cgi.

#!/usr/local/bin/perl

print "Content-type: text/plain\n\n";

print "CGI Environment:\n\n";
foreach $env_var (keys %ENV) {
    print "$env_var = $ENV{$env_var}\n";
}

if ($ENV{'CONTENT_LENGTH'}) {
    print "\nStandard Input:\n\n";
    read(STDIN,$buffer,$ENV{'CONTENT_LENGTH'});
    print $buffer;
}

Parrot: Echoing the Browser Request

Although test.cgi displays the input parsed by the server, it does not return the exact request that the browser has sent. Sometimes, being able to see this low-level request can be useful. First, seeing how the browser communicates with the server is useful for learning purposes. Second, you can see the exact format of the request, look for variations in the input, and correct the appropriate bugs in your program.

I wrote a program called parrot, listed in Listing 7.2, written in Perl for UNIX platforms. It is a Web server that simply takes the browser's request and echoes it back to the browser. Figure 7.4 shows the sample output from a request to parrot. Parrot is essentially a very small, very stupid Web server that can handle one connection at a time and just repeats what the browser says to it.

In order to use the program, type parrot at the command line. You can optionally specify the port number for parrot by typing parrot n where n is the port number. If the machine already has an HTTP server running or if you're not the site administrator, it might be a good idea to pick a high port such as 8000 or 8080. To use it, you'd point your browser at that machine on port 8000 (of course, you'd substitute a different number for 8000 if you picked a different port number).

Figure 7.4 : The response from parrot.
The parrot program. #!/usr/local/bin/perl $debug = 0; ### trap signals $SIG{'INT'} = 'buhbye'; $SIG{'TERM'} = 'buhbye'; $SIG{'KILL'} = 'buhbye'; ### define server variables ($port) = @ARGV; $port = 80 unless $port; $AF_INET = 2; $SOCK_STREAM = 1; if (-e "/ufsboot") { # Solaris; other OS's may also have this value $SOCK_STREAM = 2; } $SO_REUSEADDR = 0x04; $SOL_SOCKET = 0xffff; $sockaddr = 'S n a4 x8'; ($name, $aliases, $proto) = getprotobyname('tcp'); select(fake_handle); $| = 1; select(stdout); select(real_handle); $| = 1; select(stdout); ### listen for connection $this = pack($sockaddr, $AF_INET, $port, "\0\0\0\0"); socket(fake_handle, $AF_INET, $SOCK_STREAM, $proto) || die "socket: $!"; setsockopt(fake_handle, $SOL_SOCKET, $SO_REUSEADDR, pack("l",1)); bind(fake_handle,$this) || die "bind: $!"; listen(fake_handle,5) || die "listen: $!"; while (1) { @request = (); ($addr = accept (real_handle,fake_handle)) || die $!; ($af, $client_port, $inetaddr_e) = unpack($sockaddr, $addr); @inetaddr = unpack('C4',$inetaddr_e); $client_iname = gethostbyaddr($inetaddr_e,$AF_INET); $client_iname = join(".", @inetaddr) unless $client_iname; print "connection from $client_iname\n" unless (!$debug); # read first line $input = <real_handle>; $input =~ s/[\r\n]//g; push(@request,$input); $POST = 0; if ($input =~ /^POST/) { $POST = 1; } # read header $done = 0; $CONTENT_LENGTH = 0; while (($done == 0) && ($input = <real_handle>)) { $input =~ s/[\r\n]//g; if ($input =~ /^$/) { $done = 1; } elsif ($input =~ /^[Cc]ontent-[Ll]ength:/) { ($CONTENT_LENGTH = $input) =~ s/^[Cc]ontent-[Ll]ength: //; $CONTENT_LENGTH =~ s/[\r\n]//g; } push(@request,$input); } # read body if POST if ($POST) { read(real_handle,$buffer,$CONTENT_LENGTH); push(@request,split("\n",$buffer)); } &respond(@request); close(real_handle); } sub respond { local(@request) = @_; # HTTP headers print real_handle "HTTP/1.0 200 Transaction ok\r\n"; print real_handle "Server: Parrot\r\n"; print real_handle "Content-Type: text/plain\r\n\r\n"; # body foreach (@request) { print real_handle "$_\n"; } } sub buhbye { close(fake_handle); exit; } As an example of parrot's usefulness for CGI programming, I wanted to learn how to use Netscape's support for the HTML File Upload feature supported in its 2.0 browser (discussed in detail in Chapter 14, "Proprietary Extensions"). However, the RFC on File Upload was flexible, and I was interested specifically in how Netscape implemented it. Because Netscape did not document this feature well, I created a sample file upload form and had it connect to the parrot server. After submitting the file, parrot returned exactly what Netscape had submitted. After obtaining the format of the upload, I was able to write the scripts in Chapter 14 that correctly handled file upload. Common Errors There are several common errors people tend to make when programming CGI. A large percentage of the problems people generally have with CGI programming (other than a lack of conceptual understanding that this book hopefully addresses) falls under one of the categories described next. You should be familiar with all of these errors, their symptoms, and their solutions; they will save you a lot of time chasing after tiny mistakes. The most common mistake is not to send a proper CGI header. You need to have either a Content-Type or a Location CGI header, and you can send only one or the other but not both. Each line should technically end with a carriage return and a line feed (CRLF), although a line feed alone usually works. 
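To make the header rules concrete, this is about the smallest well-formed CGI response one can write in C; note the explicit CRLF line endings and the blank line that ends the header block.

/* hello.c - a minimal well-formed CGI response. */
#include <stdio.h>

int main(void)
{
    printf("Content-Type: text/plain\r\n");
    printf("\r\n");                /* blank line: end of the headers */
    printf("Hello, World!\n");     /* body */
    return 0;
}

If this were an nph script, the first line printed would instead be the full HTTP status line, as in the example that follows.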
The headers and the body of the CGI response must be separated by a blank line. Assuming you use the proper header format, you also want to make sure you use the proper MIME type. If you are sending an image, make sure you send the proper MIME type for that image rather than text/html or some other wrong type. Finally, if you are using an nph script, the program must send an HTTP status header as well. HTTP/1.0 200 Ok Content-Type: text/plain Hello, World! One common problem especially pertinent to UNIX systems is making sure the server can run the scripts. You want to make sure first that the server recognizes the program as a CGI program, which means that it is either in a designated scripts directory (such as cgi-bin) or its extension is recognized as a CGI extension (that is, *.cgi). Second, the server must be able to run the script. Normally, this means that the program must be world-executable; if it is a script, it must be world-readable as well. Additionally, it means you must be familiar with how your server is configured. Always use complete pathnames when writing a CGI program. CGI programs can take advantage of the PATH environment variable if it is trying to run a program, but it is more secure and reliable to use the full pathname rather than rely on the environment variable. Additionally, you want to make sure data files that you open and close are referred to as a complete pathname rather than a relative pathname. There are situations in which you use paths relative to the document root rather than the complete path. For example, within HTML files, the path is always listed as relative to the document root. If your GIF file is located in /usr/local/etc/httpd/htdocs/images/pic.gif and your document root is usr/local/etc/httpd/htdocs/ you reference this picture as <img src="/images/pic.gif"> and not as <img src="/usr/local/etc/httpd/htdocs/pic.gif"> This latter tag will give you a broken image message. In general, use relative paths from within HTML files and use full paths for data files and other such input and output. Know what type of input to expect. Remember that certain form elements such as checkboxes have the unique quality that they only get passed to the server when they have been checked, and you need to make note of these quirks. Finally, if you're using an NCSA-style authentication for your Web server, you want to make sure you set the limitations on both GET and POST. There are many language-specific problems that are often useful to know, especially if you are using several different languages. C users should remember to compile the proper libraries when linking and to make sure your include files are in the proper place. Watch out for pointer code that could cause segmentation faults within the program. Finally, use the full pathname. Summary You can approach testing and debugging CGI programs from two perspectives: actually testing the programs over the Web and testing them from the command line. Both have different advantages and disadvantages. Testing your programs over the Web enables you to see whether your CGI program works properly under expected conditions given real input. On the other hand, it can be a difficult and sometimes inefficient process. Testing from the command line gives you greater flexibility to debug your programs thoroughly at the cost of testing your scripts using real input from a true Web environment. You can also learn a lot by determining the exact format and content of the input from the Web. 
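The checkbox caveat above is easy to get wrong, so here is a small defensive sketch in C: it only checks whether a field name occurs in the query string at all, and treats its absence as a normal case rather than an error. A real program would URL-decode and parse the string properly; has_field below is deliberately simplistic.

/* vegetable.c - never assume a checkbox field is present in the input. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int has_field(const char *query, const char *name)
{
    size_t len = strlen(name);
    for (const char *p = query; (p = strstr(p, name)) != NULL; p += len) {
        int at_start = (p == query || p[-1] == '&');
        if (at_start && p[len] == '=')
            return 1;
    }
    return 0;
}

int main(void)
{
    const char *query = getenv("QUERY_STRING");
    if (query == NULL)
        query = "";

    printf("Content-Type: text/plain\r\n\r\n");

    if (has_field(query, "vegetable"))
        printf("At least one vegetable box was checked.\n");
    else
        printf("No vegetable box was checked at all.\n");  /* not an error! */
    return 0;
}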
Most CGI errors can be attributed to a few common errors. Before you spend a lot of time doing exhaustive testing and debugging, check to make sure you did not make one of the following mistakes: - Sent an improper CGI header. - Did not use complete pathnames, or did not properly differentiate between real pathnames and relative pathnames (to document root). - Did not compile your code properly (there are syntax or other errors). - Did not correctly predict the type of information you received. For example, a checkbox on a form does not guarantee that the CGI program receives any input related to that checkbox.
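For the last item in that list, the quickest cure is to look at the raw input before writing any parsing code. The Perl test.cgi shown earlier does exactly that; the same idea in C looks roughly like this.

/* test.c - C sketch of the test.cgi idea: dump the CGI environment and,
 * for POST requests, echo the raw standard input. */
#include <stdio.h>
#include <stdlib.h>

extern char **environ;

int main(void)
{
    printf("Content-Type: text/plain\r\n\r\n");

    printf("CGI Environment:\n\n");
    for (char **env = environ; *env != NULL; env++)
        printf("%s\n", *env);

    const char *len = getenv("CONTENT_LENGTH");
    if (len != NULL) {
        long n = atol(len);
        printf("\nStandard Input:\n\n");
        for (long i = 0; i < n; i++) {
            int c = getchar();
            if (c == EOF)
                break;
            putchar(c);
        }
    }
    return 0;
}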
http://www.webbasedprogramming.com/CGI-Developers-Guide/ch7.htm
CC-MAIN-2022-05
refinedweb
3,629
51.99
In this article, I'll show the working prototype of the CLR extensions that provide an infrastructure for enforcing database-like data integrity constraints, such as Entity integrity, Domain integrity, Referential integrity, and User-defined integrity. The approach I describe is based on my previous article where I've introduced a set of new metadata tables - so-called Metamodel Tables (Alex Mikunov, "The Implementation of Model Constraints in .NET). The technical part of the article also makes use of the various techniques I've described in the upcoming MSDN article (September issue, 2003): ".NET internals. Rewrite MSIL code on the fly with the .NET Profiling API" () In the previous part we've mostly concentrated on the theoretical foundations of the NET Metadata extensions. In a nutshell, we've introduced a set of new metadata tables (called Metamodel Tables), which extend the existing CLR metadata. These tables are populated by querying associated meta models and constraint definitions (business rules) and describe various types of database-like constraints/rules such as field/method level constraints (FieldConstraint and MethodConstraint tables), referential integrity constraints (TypeConstraint table) and so on. Note that the proposed approach doesn't specify any particular format of the constraints description. It can be UML/OCL or an XML format. The only requirement is that these constraints have to be mapped properly to the metadata tables and MSIL (Microsoft Intermediate Language). First, let me briefly describe the basic ideas from the previous article and show how it all works in the case of the method constraints. (I also assume that the reader of this article is already familiar with the basic concepts of the CLR, such as Metadata, MSIL, JIT compilation, .NET profiling. You should look into these a bit before you continue with the article although I will briefly cover some of the CLR basics here.) Consider the following simple class C: // C# code using System; ... namespace SomeApplication { public class C { public int foo( int nNumber ) { // code goes here ... return nSomeValue } // foo() } // class C ... } Let the method C::foo() have one precondition for the input parameter in a form " 0 < nNumber < 56" and one postcondition for the return value: " nSomeValue > 4". Those of you who are familiar with Object Constraint Language can describe it like this: -- OCL code C::foo(nNumber : int): int pre : (nNumber > 0) and (nNumber < 56) post: result > 4 Assuming that the method foo() is coded as a metadata token 0x06000002 (i.e. stored in the 2nd row of the Method table) and tokens of the form 0x89XXXXXX are used to represent method constrains (which are stored in the MethodConstraint table) we would have the following metadata layout (Note that we've also added a new column to the Method table to point to the MethodConstraint table): Figure 1. Layout of the Method and MethodConstraints tables for the foo method That is, foo's constraints are coded by the metadata tokens 0x89000001 and 0x89000002, respectively, and each row has a proper value in the Relevant Virtual Address (RVA) column which points to the actual IL implementation within the image file. A general look of the Method-MethodConstraints relationship is shown here: Figure 2. 
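For readers who want to see what those pre- and postconditions amount to at run time, here is the same foo written out by hand in plain C with the checks inlined. The body is a placeholder; only the two checks mirror the OCL constraints above.

/* foo_checked.c - what the injected pre/postcondition checks amount to,
 * written out by hand. */
#include <stdio.h>
#include <stdlib.h>

static int foo(int nNumber)
{
    /* precondition: (nNumber > 0) and (nNumber < 56) */
    if (!(nNumber > 0 && nNumber < 56)) {
        fprintf(stderr, "precondition violated for foo(%d)\n", nNumber);
        abort();
    }

    int nSomeValue = nNumber + 5;          /* placeholder body */

    /* postcondition: result > 4 */
    if (!(nSomeValue > 4)) {
        fprintf(stderr, "postcondition violated: %d\n", nSomeValue);
        abort();
    }
    return nSomeValue;
}

int main(void)
{
    printf("%d\n", foo(10));
    return 0;
}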
Method and MethodConstraints relationships During JIT compilation the runtime encounters the C::foo's metadata token VAs of related MSIL implementation of pre- or postconditions and to add the corresponding IL to the method's body before it gets JIT compiled. In other words, if the original method has the following IL: // MSIL code C::foo(...) // before JIT compilation { method body //MSIL } the CLR will add pre- and postconditions to the method�s implementation as follows: // MSIL code C::foo(...) // before JIT compilation { // IL code for // if !(preconditions) throw an exception; method body // original IL code with replaced 'ret' opcodes // IL code for // if !(postconditions) throw an exception; } A generalization of this technique could use a generic function that gets called on the enter-function and on the exit-function events. and validate them ... the CorConstraintType flags will hr = ConstraintsChecker.CheckMethod ( ..., 0x06000002, ctPostCondition | ctInvariant ); } // C::foo The approach we've just described requires quite a bit of changes in the existing CLR architecture. First of all we have to modify metadata tables (add a new column to the Method table) and to add the new ones (MethodConstraint). It also requires changes in the Metadata API to allow compilers/design tools to emit additional metadata/constraints definitions. Secondly we have to change the execution engine and the CLR assembly/class loaders ("fusion".) We should also take care of the compatibility issues with the current version of .NET. It would be nice to find an intermediate approach that doesn't require many of the previously mentioned changes. In this article I'll describe a simple approach that is based on the .NET Profiling API and runtime MSIL code rewriting and allows us to avoid any changes in the existing CLR. I call this technique �.NET metadata extensions� or just �.NET extensions�. The basic idea of this approach can be outlined as follows. When the CLR loads a class and executes its method, the method's IL code is compiled to native instructions during the just-in-time (JIT) compilation process. The Profiling API provided as part of the CLR allows us to intercept this process. Before a method gets JIT-compiled we can modify its IL code. In the simplest scenario we can insert our customized prolog and epilog into the method's IL and give the resulting IL back to the JIT compiler. Depending on the application logic the newly generated IL could do some additional work before and after the original method's code is called. In our case, these prologs and epilogs (emitted by our profiler) are simply calls to the special managed DLL - CCCore.dll (I call it ".NET extension DLL"). In other words, for a given .NET module and its method (let�s say C::foo) the profiler instruments method�s IL by inserting some IL prolog which calls a special method implemented by CCCore.dll: public static int CCC::__CheckMethodDefOnEnter( int mdMethodDefToken, __arglist ) { // first parameter is a method�s metadata token // second parameter is a collection of the actual method�s // parameters (at the moment of the call). // checks method�s parameters based // on XML-encoded MethodConstraint table ... // and returns result } The first parameter is the method�s metadata token ( C::foo�s token), the second parameter is a collection of the actual method�s parameters (at the moment of the call). 
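Stripped of the IL and profiler machinery, the shape of that "generic checker on enter and on exit" is easy to show in ordinary C: one untouched original function, plus an instrumented wrapper that passes the metadata token and the arguments to shared checkers. The token constant matches the article's example; the checks themselves are invented here for illustration.

/* hooks.c - the enter/exit hook shape of the rewritten method, in C. */
#include <stdio.h>

#define TK_FOO 0x06000002   /* metadata token of C::foo in the article */

static int check_on_enter(int token, int arg)
{
    printf("enter: token=0x%08x arg=%d\n", token, arg);
    return (token == TK_FOO) ? (arg > 0 && arg < 56) : 1;
}

static int check_on_exit(int token, int result)
{
    printf("exit:  token=0x%08x result=%d\n", token, result);
    return (token == TK_FOO) ? (result > 4) : 1;
}

static int foo(int n) { return n + 5; }    /* the original, untouched body */

static int foo_instrumented(int n)
{
    if (!check_on_enter(TK_FOO, n))
        fprintf(stderr, "precondition failed\n");
    int result = foo(n);
    if (!check_on_exit(TK_FOO, result))
        fprintf(stderr, "postcondition failed\n");
    return result;
}

int main(void)
{
    foo_instrumented(10);
    return 0;
}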
The __CheckMethodDefOnEnter does the parameter validation based on a special descriptor file, which, in fact, is an XML encoded representation of the MethodConstraint table (Consistency Checker Descriptor file - CCD file.) The profiler also inserts an epilog that calls another method implemented by CCCore.dll: public static int CCC::__CheckMethodDefOnExit( __arglist ) { // __arglist should have two parameters: // 1) the orig method's return value // 2) method token // checks return value based // on XML-encoded MethodConstraint table ... // and returns result } So, the overall picture looks like this. Before compilation to native code (class C, method foo) we have C::foo(...) // ( before JIT compilation) { method body //MSIL } The profiler makes the following changes: C::foo(...) //(before JIT compilation with profiler�s changes) { // to check method�s preconditions/invariants call [CCCore]CCC::__CheckMethodDefOnEnter( foo�s token, params ) method body // orig IL code with replaced 'ret' // to check postconditions/invariants call [CCCore]CCC::__CheckMethodDefOnExit( foo�s token, return value ) } As you can see all the validation logic is moved to the NET extension dll (CCCore.dll), which does the actual job by analyzing method's parameters and the corresponding CCD file (XML-encoded MethodConstraint table). See Figure 3 for details. Figure 3. Runtime IL code instrumentation and .NET extension The major advantage of this approach is that our CLR/Rotor extensions are outside the runtime code. Every time we make changes in the code we don�t have to rebuild the "Rotor" source code. After we�re done with our changes we can merge our code and the CLR � the profiler code will become a part of the runtime engine. The CCCore dll can be merged into mscorlib or can be a separate library. The XML encoded metadata tables will become CLR metadata. First of all, the approach I propose preservers the identity of the classes. Unlike many other techniques that use custom attributes, proxy assemblies, remote proxies (context bound objects), etc. we make our changes at runtime, only! No changes in the original source code whatsoever. So, it's all absolutely transparent to the client. Here's some picture that explains how I implement the IL rewriting (method instrumentation) and add a prolog and an epilog to the method. A given method foo having N (< 255) parameters + the "this": ReturnType C::foo ( C* this /*invisible param*/, type1 param1, type2 param2, type3 param3, ..., typeN paramN ) { IL method body } will be rewritten by the profiler like this: ReturnType C::foo ( C* this /*invisible param*/, type1 param1, type2 param2, type3 param3, ..., typeN paramN ) { // prolog >> // Arguments of the method are loaded on the stack in // order of their appearance // in the method signature, with the last signature param // being loaded last. // So for instance methods the "this" is always the first argument: // ---------- // | paramN | // | ... | // | param3 | // | param2 | // | param1 | // | this | <-- for instance methods, goes first ( slot 0 ) // ---------- ldc.i4 tkMethodDef // load C::foo's token ldarg 0 // load param0 on the stack ( _param0 ) ldarg 1 // load param1 on the stack ( _param1 ) ldarg 2 // load param2 on the stack ( _param2 ) ... 
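The reason the prolog can push the token followed by every argument and then call a single routine is that the checker takes a variable argument list (__arglist above). The same shape in C, using stdarg, would look like this; the "no negative arguments" rule is only a stand-in for whatever the CCD table would actually say.

/* varcheck.c - one vararg checker that can inspect any number of
 * integer arguments, mirroring __CheckMethodDefOnEnter's signature. */
#include <stdarg.h>
#include <stdio.h>

static int check_on_enter(int token, int argc, ...)
{
    va_list ap;
    int ok = 1;

    va_start(ap, argc);
    for (int i = 0; i < argc; i++) {
        int value = va_arg(ap, int);
        printf("token 0x%08x, arg %d = %d\n", token, i, value);
        if (value < 0)          /* stand-in rule: no negative arguments */
            ok = 0;
    }
    va_end(ap);
    return ok;
}

int main(void)
{
    if (!check_on_enter(0x06000002, 3, 7, 42, -1))
        printf("some argument failed its check\n");
    return 0;
}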
// analyze params by calling CChecker call vararg int32 CCCore.CCC.__CheckMethodDefOnEnter( tkMethodDef, __arglist ) pop // remove CCCore.CCC.Check's result // prolog << orig method body with replaced "ret" opcode goes here // epilog >> dup // to copy method's ret value and avoid adding // new local vars!!! ldc.i4 tkMethodDef // load method's token // analyze params by calling CChecker call vararg int32 CCCore.CCC.__CheckMethodDefOnExit( __arglist ) // __arglist should have the orig method's return value (goes first!!!) // + method token pop // remove CCCore.CCC.Check's result ret // retun method's result // epilog << } Secondly, a module that gets "instrumented" doesn't have to be linked against the CCCore.dll DLL (".NET extension" dll). Look at the CC.IL module provided as an example in \Barracuda2\CChecker\IL folder to see that it doesn't refer to CCCore.dll. So, we "dynamically link" this dll at runtime. Finally, the method's parameters at the moment of the call can be XML-serialized. Thus, instead of CCD-like files (Consistency Checker Descriptors) we could use XML schemas/XPath expressions (=XPath assertions/rules, see SchemaTron assertion language for an example) to validate the input/output: public static int CCC:: __CheckMethodDefOnEnter ( int mdMethodDefToken, __arglist ) { // first parameter is a method�s metadata token // second parameter is a collection of the actual method�s // parameters (at the moment of the call). // serialize input as an XML (e.g. SOAP) // and validate it against a schema file or a set of XPath expression ... // and returns result } In any case it's a move toward a more standardized way of validation, which may also imply the creation of an infrastructure similar to the SoapExtension framework provided by the ASP.NET Web Services. The attached zip file includes two folders CChecker and CCCore. The first one contains the binary file which is a .NET profiler DLL (CChecker.DLL). The second folder contains a C# project implementing the .NET extension called CCCore.dll. The CC.IL example module is provided in \Barracuda2\CChecker\IL folder. To see how it all works together follow those steps: It'll show some output displaying various information about the method parameters and their validity. To turn off the .NET extension and to see the difference just run cc_off.bat. CCCore.dll uses the CC.exe.CCD.config file (Consistency Checker Descriptor file) to validate CC's methods. The Consistency Checker Descriptor file format is self-explanatory. We use the following XPath expression " /ccdescriptor/methods/method/@token" to get all the method tokens in the descriptor file. To get a method's constraints we use " /ccdescriptor/methods/method[@token='sometokenvalue']/parameters/parameter". 25 Aug 2003 - updated source download General News Question Answer Joke Rant Admin
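The article does not reproduce a CCD file, but the two XPath expressions above pin down most of its shape. A guessed fragment consistent with them might look like the following; the min/max and index attributes and the parameter element contents are invented here purely for illustration.

<!-- A guessed CC.exe.CCD.config fragment, shaped only by the two XPath
     queries quoted above. -->
<ccdescriptor>
  <methods>
    <method token="0x06000002">
      <parameters>
        <parameter index="0" min="1" max="55"/>
      </parameters>
    </method>
  </methods>
</ccdescriptor>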
http://www.codeproject.com/KB/dotnet/model_constraints_in_net2.aspx
crawl-002
refinedweb
2,025
54.42
We saw the use of the javax.xml.transform package and two of its subpackages in the output( ) method of Example 19-2. There it was used to perform an "identity transform," converting a DOM tree into the corresponding XML file. But transforming the format of an XML document is not the only purpose of these packages. They can also be used to transform XML content according to the rules of an XSL stylesheet. The code required to do this is remarkably simple; it's shown in Example 19-3. This example uses javax.xml.transform.stream to read files containing a source document and a stylesheet, and to write the output document to another file. JAXP can be even more flexible, however: the transform.dom and transform.sax subpackages allow the program to be rewritten to (for example) transform a document represented by a series of SAX parser events into a DOM tree, using a stylesheet read from a file.

package je3.xml;
import java.io.*;
import javax.xml.transform.*;
import javax.xml.transform.stream.*;

/**
 * Transforms an input document to an output document using an XSLT stylesheet.
 * Usage: java XSLTransform input stylesheet output
 **/
public class XSLTransform {
    public static void main(String[] args) throws TransformerException {
        // Set up streams for input, stylesheet, and output.
        // These do not have to come from or go to files; we can also use the
        // javax.xml.transform.{dom,sax} packages to use DOM trees and streams
        // of SAX events as sources and sinks for documents and stylesheets.
        StreamSource input = new StreamSource(new File(args[0]));
        StreamSource stylesheet = new StreamSource(new File(args[1]));
        StreamResult output = new StreamResult(new File(args[2]));

        // Get a factory object, create a Transformer from it, and
        // transform the input document to the output document.
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer = factory.newTransformer(stylesheet);
        transformer.transform(input, output);
    }
}

In order to use this example, you'll need an XSL stylesheet. A tutorial on XSL is beyond the scope of this chapter, but Example 19-4 shows one to get you started. This stylesheet is intended for processing the XML log files created by the java.util.logging package. For each <record> tag it encounters in the log file, it extracts the textual contents of the <sequence>, <date>, and <message> subtags, and combines them into a single line of output. This discards some of the log information, but shrinks and simplifies the log file, making it more human-readable.

<?xml version="1.0"?>
<xsl:stylesheet xmlns:
  <xsl:template
    <xsl:value-of
    <xsl:text>: </xsl:text>
    <xsl:value-of
    <xsl:text>: </xsl:text>
    <xsl:value-of
  </xsl:template>
</xsl:stylesheet>
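To see the transform end to end, here is a cut-down example of what a java.util.logging record looks like (a real record carries more elements than shown), followed by the single line the stylesheet reduces it to.

<record>
  <date>2003-07-01T14:23:05</date>
  <sequence>27</sequence>
  <level>INFO</level>
  <message>Connection accepted</message>
</record>

With the stylesheet above, this record becomes roughly: 27: 2003-07-01T14:23:05: Connection accepted. The program would be run as, for example, java XSLTransform log.xml summarize.xsl summary.txt (the file names are arbitrary).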
http://books.gigatux.nl/mirror/javaexamples/0596006209_jenut3-chp-19-sect-3.html
CC-MAIN-2018-43
refinedweb
435
56.66
In the C syntax highlight mode with auto-indent enabled, I have encountered two strange behaviors, which are bugs probably. I illustrate them below with short code snippets. (Bug #1) Multi-line (more then 2 lines) C comment followed by preprocessor stuff make auto-indent be still controlled by preceding comment indent. See this code snippet: /** * The spaces before the asterisks are probably probably the triggers. * In my real source there is only space so the asterisk are in a column, * but it then breaks the example on this forum). */ #include "file1" // Manually moved cursor to column 1 (ok, expected behaviour) #include "file2" // Incorrect auto-indent does not respect the previous line This has been already reported here (thread), but this example is much shorter. (Bug #2) Bad braces handling when creating block with one or more empty lines. When relying on auto-indent, 1st brace leads to indent one tab to right (correct behavior). Pressing one more enter makes new line. However the ending brace after the empty line(s) then does not result to un-indent, so the result looks like this: { } // the brace should be on column 1 (Without the empty line between the braces, the problem does not occur and the braces is correctly un-indented.) My environment: - Build: 2217 (64-bit) - OS: Windows 7 (64-bit) - user settings: { "color_scheme": "Packages/Color Scheme - Default/LAZY.tmTheme", "auto_complete": false, "auto_match_enabled": false, "trim_trailing_white_space_on_save": true, "translate_tabs_to_spaces": true, "smart_indent": false, "scroll_past_end": false, "drag_text": false, "rulers": [80, 120] } Hope these can be fixed, as both happen for me quite often due the way I work... Regards,Mity
https://forum.sublimetext.com/t/bugs-with-auto-indent/7034/1
CC-MAIN-2016-22
refinedweb
266
58.62
Win64/AMD64 API This article applies to Windows only. See also: Multiplatform Programming Guide Contents - 1 Old Information - 2 Notes on Win64 for AMD calling conventions - 2.1 About documentation - 2.2 Preliminary notes: - 2.3 Some definitions: - 2.4 Calling conventions in Win64 for AMD - 2.4.1 Parameter passing - 2.4.2 Return values - 2.4.3 Volatile and nonvolatile registers - 2.4.4 Stack configuration - 2.5 Data alignment - 2.6 Constants - 2.7 Exception - 2.8 Debugger support - 2.9 Differences between PE and PE+ exe format - 2.10 Things that must be done on FreePascal Old Information As far as I know, there is no offical document yet decribing the win64/amd64 api so the follow information is collected from several sources. Begin in this link Data type sizes Basic information at Notes on Win64 for AMD calling conventions About documentation Documentation about Windows x86-64 calling convention does exist. It can be found online on msdn: This documentation says that it's preliminary and subject to change, and is dated February 2005. However, there is a SWConventions.doc file in Microsoft Platform SDK, dated May 2005. It's a more recent version of what is published in URL above (in fact there are some important differences). In the file the only warning about "preliminary documentation" is found in "hot patchability" chapter. Since win64 has shipped in the USA I think that this documentation can be assumed as definitive. This file is located in the same directory of binaries for amd64 (cl, ml64, link and so on). I think that it cannot be redistributed (license says that one can copy it for its personal reference, but doesn't say anything about redistribution of the file). That file is the reference upon documentation in this wiki is based. Preliminary notes: rvas are still 32 bit: maximum file size for an exe file is 2 gb, so there is no need to use larger address space. Some definitions: leaf function: a function that doesn't call any function, neither allocates stack space by itself. This function doesn't have a frame pointer. frame function: a function that has a frame pointer. Calling conventions in Win64 for AMD There is only one calling convention. Actually Microsoft include files are shared among Windows 32 bit and 64 bit. Functions modifiers like stdcall, cdecl and fastcall are ignored on 64bit windows since they use an unique calling convention. Parameter passing The first 4 parameters are passed on registers. Integer parameters (from 1 to 8 bytes long) are passed in RCX, RDX, R8 and R9 registers, where the first parameter (the leftmost) is passed in RCX and the fourth in R9. Floating point parameters are passed in XMM0L, XMM1L, XMM2L and XMM3L. Parameters beyond the fourth are passed on stack. Note: If there are both integer and floating point parameters, use of n-th register is mutually exclusive; if the second parameter is a floating point number and the third is integer, the second one goes in XMM1L and the third in R8. Example: function(int1,real1,int2,real2,int3,real3) means: push real3 // passed on stack push int3 // passed on stack XMM3L=real2 R8=int2 XMM1L=real1 RCX=int1 Actually cl doesn't use real pushes to put arguments on stack, but we'll talk about it later. Parameters greater than 8 bytes are passed by reference (so it's caller duty to allocate space for a copy of the parameter and then pass a pointer to it). 
Some notes for vararg/unprototyped functions: if there are floating point arguments, they must be passed both in integer register and in floating point register, without conversion, so that the function can retrieve the parameter if it expected a integer parameter (in other words: if an unprototyped function expects an integer parameter as second parameter it would look in RDX. Since it's not prototyped we don't know it, and if we pass a floating point argument as second parameter we end up with the value in XMM1L and garbage in RDX: so we copy the value in both registers to avoid problems.) If the argument is beyond the fourth there is no special-handling, since callee will look at the same stack location if it expects an integer parameter or a floating point parameter. Return values Integer values which have length between 1 and 8 bytes are returned in RAX. Floating point values, __m128, __m128i or __m128d values are returned in XMM0. If return value is too big to fit in one of these registers, it's caller duty to allocate stack space before calling the function, and to pass a pointer to this location as first parameter (so there are now only three registers to store other parameters). Volatile and nonvolatile registers In addition to registers used to pass parameters, RAX, R10, R11, XMM4 and XMM5 aren't preserved by callee and should be saved by the caller if needed (R10 and R11 are used for syscall/sysret, don't know about XMM4 and XMM5). Other registers must be saved by callee if used, even if a special treatment is reserved to x87 and mmx registers. About x87 and mmx registers x87 and mmx registers aren't used: all floating points calculations are made in XMM* registers. Although x87 and mmx registers are guaranteed to be preserved across context switching, this is not true for functions calls. Microsoft says that if you use x87 registers you should consider them as volatile registers. There's nothing about mmx registers apart the context-switching thing, so I think that even mmx registers have to be considered volatile. x87 and MMX control registers x87 The x87 control word is considered nonvolatile and must be preserved by the callee. When program is started, FPCSR is set this way: FPCSR[0:6] : Exception masks all 1's (all exceptions masked) FPCSR[7] : Reserved - 0 FPCSR[8:9] : Precision Control - 10B (double precision) FPCSR[10:11] : Rounding control - 0 (round to nearest) FPCSR[12] : Infinity control - 0 (not used) If a function modifies this register it should then reset the register to the original value before returning. It should do the same thing when calling another function (however, if there is a function whose purpose is to modify this register this pass can be skipped. It can be skipped even if it's known that called function doesn't rely on that register). MMX MMX control register bits are considered volatile from 0 through 5, and nonvolatile from 6 and beyond. Same considerations made for FPCSR are valid for nonvolatile part of MXCSR. This is MCXSR setting when program starts: MXCSR[6] : Denormals are zeros - 0 MXCSR[7:12] : Exception masks all 1's (all exceptions masked) MXCSR[13:14] : Rounding control - 0 (round to nearest) MXCSR[15] : Flush to zero for masked underflow - 0 (off) Stack configuration Shadow space There is the concept of "shadow space" when calling a function. 
The caller must always allocate a fixed space of 32 bytes (4*8 bytes) on stack before calling a function: this is called shadow space, and it's used by callee to store arguments that are passed on registers if the callee needs these arguments to be in memory (after storing four register arguments to shadow area, all arguments appear as contiguous array on stack, making it easy to implement functions like printf()). The register parameters are also typically stored into shadow space when generating debugging code, providing the debugger a consistent location where to look for parameter values. Microsoft cl has an option to force storing register parameters to shadow space, which is enabled for debug builds. This stack space must always be allocated, even if function has less than 4 parameters. This space must be adjacent to callee return address: stack pointer must point to the end of this space before calling the function. So if more than 4 parameters are passed, parameters beyond 4 are at higher addresses, then there is shadow space on top of the stack. Stack alignment Stack pointer (RSP) must be aligned on multiples of 16 bytes when calling another function. Note: when called functions begins execution, stack isn't aligned in 16-bytes boundary: in fact it is aligned before call instruction, then call instruction places return address on stack (which is 8 bytes long) making stack unaligned. Therefore, in leaf functions the stack stays unaligned (according to the definition of leaf functions, any change to the stack pointer makes the function non-leaf). Frame pointer In most cases, Win64 API uses the fixed stack, i.e. value of the stack pointer is changed only in prolog and epilog, not in the function body. This removes the need of having a separate frame pointer. An exclusion to this rule is dynamic stack allocation using alloca() function. Also, a frame pointer can be set pointing into the middle of local variables area, so local variables are addressed at both positive and negative offsets from the FP. Since offsets values below 128 are encoded using shorter instructions, this allows generating smaller code. The frame pointer is not restricted to be RBP, it can be any non-volatile register. Parts of function: prolog, body and epilog To make work of exception handler easy, functions are considered "splitted" in three parts: prolog, body of function and epilog. Prolog is described using unwind data, and the epilog has a standard configuration so that exception handler can recognize them properly. Function prolog Prolog is the very first part of function. Generally, function prolog does this work: - Copies parameters from registers to shadow space - Pushes registers to be preserved on stack - Allocates room on stack for local variables - Sets a frame pointer (so frame pointer is set AFTER local variables!) if needed - Allocates space needed to store volatile registers that must be preserved in function calls - Allocates shadow space for called functions. This is what microsoft cl does, although this should not be seen as a restriction, since you can chose other methods. If in the body of function dynamic allocation of memory on stack is needed, no problem: memory is allocated but block stays between frame pointer and rsp-(size of space needed for volatile registers+size of shadow space for called functions), so that shadow space for called functions it's always on top of the stack. 
This layout might help: --------------- -- | R9 | | --------------- | | R8 | | --------------- |Shadow space for this function, allocated by caller | RDX | | --------------- | | RCX | | --------------- -- | Return address| --------------- | | | | Local variables of this function | | | | --------------- | Frame Pointer | Optional --------------- | | | | Memory used for dynamic allocation | | | | --------------- | | | | Memory used to save registers that may be modified by callee | | | | --------------- -- | R9 | | --------------- | | R8 | | --------------- |Shadow space for callee, allocated by this function | RDX | | --------------- | | RCX | | --------------- -- Code example: int func2(int i1, int i2, int i3) { int lets_waste_stack_space = 53; return (i1 + i2 + i3); } int func1(int i1, int i2) { int another_local_variable = 51; void * buf = alloca(100); return (func2(i1, i2, 3)); } int main() { int a_local_variable = 50; func1(1,2); return(0); } This is memory layout in func1: xxxxxxxx xxxxxxxx i2 = rdx i1 = rcx retaddr old rbp | local variables buf | another_local_variable <= rbp | xxxxxxxx | shadow for | room for buf xxxxxxxx | func2 | dinamically xxxxxxxx | before | allocated xxxxxxxx <= rsp1 | alloca | xxxxxxxx | xxxxxxxx | xxxxxxxx | xxxxxxxx | xxxxxxxx | xxxxxxxx | xxxxxxxx | xxxxxxxx | xxxxxxxx | xxxxxxxx <= buf__________________| xxxxxxxx | shadow for xxxxxxxx | func2 xxxxxxxx | xxxxxxxx <= rsp2______________________________ | Main allocates shadow space of func1: 32 bytes even if only 2 parameters are used. It puts 1 in rcx and 2 in rdx, then call instruction places return address on stack. This is what func1 does in its prolog: - Copies parameters from registers to shadow space: - Copies rdx in rsp+16 and rcx in rsp+8 - Pushes registers to be preserved on stack: - Pushes ebp. - Allocates room on stack for local variables: - Decrements rsp by 16 to reserve room for buf and another_local_variable - Sets a frame pointer: - rbp now points to rsp (that is, to another_local_variable address) - Allocates space needed to store volatile registers that must be preserved in function calls: - no space needed - Allocates shadow space for called functions: - Decrements rsp by 32. rsp position is rsp1 in previous scheme. End of prolog. In function body, alloca is called: rsp is decremented by 112 (alloca allocates memory aligned on 16-bytes boundary). rsp position is rsp2 in schema now. Buf doesn't point to same location of rsp since shadow space for callee must stay on top of the stack: instead it points 32 bytes before rsp, so that buf stays between rbp+8 and rsp-32. When func2 is called, it finds its shadow space where it should be (adjacent to return address). Note that this is modus operandi of microsoft cl, but it's perfectly legal to use other strategies (this is even remarked by microsoft). It is perfectly legal to push registers to be preserved and to allocate shadow space before a function call, and then release memory after the call. Function epilog Epilog is the last part of function. Function epilog does this work: - Release stack space - Pops registers that were saved - returns It should not do other things: function result should be set before epilog. Microsoft suggests that for functions without frame pointer stack is released adding a constant value to rsp. For function with frame pointer this can be accomplished in two ways: adding a costant to rsp or loading rsp with rbp address. This restrictions are needed so that exception handler can easily detect function epilog. 
This is ML64 assembly listing for func1 in previous example; Comments and exception-related stuff has been removed. You can see that cl uses a more optimized approach: instead of reserve space for local variables, set frame pointer and reserve space for shadow it reserves space in an unique operation and then sets frame pointer to the right value. PUBLIC func1 _TEXT SEGMENT func1 PROC NEAR mov DWORD PTR [rsp+16], edx ; save second parameter mov DWORD PTR [rsp+8], ecx ; save first parameter push rbp ; save old rbp value sub rsp, 48 ; room for local vars + shadow space for called function lea rbp, QWORD PTR [rsp+32] ; sets frame pointer to the end of local variables block. ; end of prolog mov DWORD PTR [rbp], 51 ; another_local_variable = 51; mov eax, 112 ; make room for buf sub rsp, rax lea rax, QWORD PTR [rsp+32] ; mov ecx, DWORD PTR [rax] ; buf now points to rsp+32. Shadow space lies between mov QWORD PTR 8[rbp], rax ; rsp+32 and rsp. mov r8d, 3 ; third parameter of func2: 3 mov edx, DWORD PTR 40[rbp] ; second parameter: i2 mov ecx, DWORD PTR 32[rbp] ; first parameter: i1 call func2 ; call to func2 ; here starts epilog lea rsp, QWORD PTR [rbp+16] ; frame function, use lea to release stack pop rbp ; pop ebp register that was saved ret 0 ; return func1 ENDP _TEXT ENDS Data alignment For performance reasons, data should be aligned to its natural alignment. Arrays should be aligned on their elements' natural alignment. Fields inside structures should be aligned on their natural alignment, and structure itself should be aligned according to alignment of its wider field (so if a record is made of a byte and a qword, record is aligned on 8-byte boundary, byte starts at record+0 and qword at record+8). Functions must be aligned on 4 byte multiples, but 16-byte alignment is encouraged for performance reasons. Constants Enums and integer constants are treated as 32-bit integers. Bitfields can't be larger than 64 bits. Exception Win64 doesn't support the old i386 way of doing exception handling using fs:(0) as a linked list of exception handlers. Instead, it uses table based exception handling. However, FPC uses it's own code to unwind exceptions so it needs a way to intercept exceptions before the table based exception handling jumps in. This can be done using vectored exception handling which is new in Windows XP: Don't get fooled by the comment at the top of the article, it only means that the code below uses fields in the context record which aren't available in win64 (eip vs. rip) so adapting this, the code works. Debugger support Differences between PE and PE+ exe format PE32+ Magic number (in optional header) is 0x20b while PE32 is 0x10b There is no BaseOfData field in Standard Fields of Optional Header Windows Nt Specific Fields (Optional Header): ImageBase, SizeOfStackReserve, SizeOfStackCommit, SizeOfHeapReserve, SizeOfHeapCommit are 8 bytes long instead of 4 Import Lookup Table entries are 64 bits long instead of 32. Bit 63 is Ordinal/Name flag, bits 62 through 0 are Ordinal Number or Hint/Name Table RVA when Ordinal/Name flag is 1 or 0 respectively Import Address Table has same structure as Import Lookup Table TLS Directory Raw Data Start VA, Raw Data End VA, Address of Index, Address of Callbacks are 8 bytes long instead of 4. 
Things that must be done on FreePascal First of all: I'm not a fpc developer nor I know fpc internals (I only have some ideas of what files do what thing) so I might be wrong :P It has been said that it's useless to make a win64 port before gcc has been ported, since fpc relies on gas, ld and gdb. But maybe something can start already, so that fpc is already "one step forward" when gcc arrives on win64. Calling convention First of all, a new calling convention is required. Here there are two choices: - Make a brand new calling convention (win64amd?) - Make cdecl, stdcall and safecall as "aliases" to this new calling convention on x86_64-win64 Second choice is microsoft choice: since microsoft c compiler considers these conventions all the same thing, logical consequence is that every other c compiler will use this convention in the future. Old calling conventions have no meaning in win64. First choice can be used if there is the possibility that some c compiler uses these conventions even on win64. This means that freepascal headers for windows apis should be rearranged: on i386 there should be a {$calling stdcall} while on win64 for amd there should be a {$calling win64amd} and parts where calling convention is directly specified as function modifiers should be carefully modified/moved to inc files. If second option is chosed it could be useful to temporary set this name to "win64amd" so that it can be tested and debugged on linux-x86_64, where we have gdb (making functions with win64amd modifier and calling them in pascal source files, and test and debug the calling convention). When everything is ok it will be an alias to cdecl, stdcall and safecall on win64 for amd and name will be removed. A function that is declared to use this convention should do these things: - if RCX, RDX, R8, R9, XMM0, XMM1, XMM2, XMM3 are used, their value should be saved in shadow space - other nonvolatile registers, if used, should be pushed on stack. These registers are R12 through R15, RDI, RSI, RBX, XMM6 through XMM15 - MXCSR and FPCSR, if used, should be pushed on stack. A function that call a function that uses this convention should do these things: - Save volatile registers, if used. These are RCX, RDX, R8, R9, XMM0, XMM1, XMM2, XMM3, RAX, R10, R11, XMM4, XMM5. - Save x87 and MMX registers, if used. - If MXCSR and FPCSR have been modified, their original value should be restored. - Push parameters beyond the fourth on stack and align stack on 16-byte boundary if needed. Stack must be padded before pushing parameters according to current stack alignment and number of parameter beyond fourth, so that when last parameter that must be on stack is pushed, stack is 16 byte aligned. - Put parameters 1..4 on registers. - Allocate 32 bytes on stack for shadow space of callee. Output format Okay, there is no gas for win64. Since rtl and windows api should be adjusted, a temporary way to start working on it could be to use masm output and compile with ml64. There is a masm writer, and it looks like it has been upgraded (if not upgraded, something has started to move) to handle ml64 syntax too. While this is a good thing, since it's good to have masm output option even if gas would be available, it can be seen as a useless effort. So, binary writer could be improved to handle x86_64 as well. x86_64 is i386 with larger pointers, a couple of instruction were added and some removed: it should be easy to adapt i386 binary writer so that it can write x86_64 code too. 
Another good thing is that this can be tested and debugged under linux for x86_64, and this platform would benefit from binary writer too. There is the PE+ coff format to implement, but it's not very different to PE coff: maybe differences will be written in this wiki too. Adapt rtl Having binary writer and calling convention, link can be performed with microsoft link until binutils are ported to win64: we could start adapting windows rtl. Of course, debugging support will still be missed, and I think that adding support for another debugger like windbg should be a big and useless effort, but while gcc isn't available for win64 freepascal could be almost-ready and win64 could be considered an experimental platform, not ready yet to be included to the official ones.
https://wiki.freepascal.org/Win64/AMD64_API
CC-MAIN-2020-45
refinedweb
3,571
56.29
Devel::System - intercept calls to system to add extra diagnostics use Devel::System; $Devel::System::dry_run = 1; # don't really do it system qw( rm -rf / ); or from the command line: perl -MDevel::System=dry_run -e'system qw( rm -rf / )' Devel::System hooks the system builtin to add diagnostic output about what system calls are being made. It's like the -x switch for /bin/sh all over again. The behaviour of the substitued system builtin can be swayed by the following package variables in the Devel::System namespace Don't actually perform the command. Always returns $return The return value to use when $dry_run is active. Defaults to 0 The filehandle to print the diagnostics to. Defaults to \*STDERR In addition there are the following import symbols that you can use to set options from the commands line. Sets $dry_run to a true value. Devel::System must be used before any other code that has a call to system in order for it to be used in preference of the built-in. This should normally be easilly arranged via the command line as shown in "SYNOPSIS" or via "PERL5OPTS" in perlrun Richard Clamp <richardc@unixbeard.net> This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~rclamp/Devel-System-0.01/lib/Devel/System.pm
CC-MAIN-2014-23
refinedweb
217
59.74
[algorithm] How to code a URL shortener? Why would you want to use a hash? You can just use a simple translation of your auto-increment value to an alphanumeric value. You can do that easily by using some base conversion. Say you character space (A-Z,a-z,0-9 etc') has 40 characters, convert the id to a base-40 number and use the characters are the digits. I want to create a URL shortener service where you can write a long URL into an input field and the service shortens the URL to "". Edit: Due to the ongoing interest in this topic, I've published an efficient solution to GitHub, with implementations for JavaScript, PHP, Python and Java. Add your solutions if you like :) Instead of " abcdef" there can be any other string with six characters containing a-z, A-Z and 0-9. That makes 56~57 billion possible strings. My approach: I have a database table with three columns: - id, integer, auto-increment - long, string, the long URL the user entered - short, string, the shortened URL (or just the six characters) I would then insert the long URL into the table. Then I would select the auto-increment value for " id" and build a hash of it. This hash should then be inserted as " short". But what sort of hash should I build? Hash algorithms like MD5 create too long strings. I don't use these algorithms, I think. A self-built algorithm will work, too. My idea: For "" I get the auto-increment id 239472. Then I do the following steps: short = ''; if divisible by 2, add "a"+the result to short if divisible by 3, add "b"+the result to short ... until I have divisors for a-z and A-Z. That could be repeated until the number isn't divisible any more. Do you think this is a good approach? Do you have a better idea? Not an answer to your question, but I wouldn't use case-sensitive shortened URLs. They are hard to remember, usually unreadable (many fonts render 1 and l, 0 and O and other characters very very similar that they are near impossible to tell the difference) and downright error prone. Try to use lower or upper case only. Also, try to have a format where you mix the numbers and characters in a predefined form. There are studies that show that people tend to remember one form better than others (think phone numbers, where the numbers are grouped in a specific form). Try something like num-char-char-num-char-char. I know this will lower the combinations, especially if you don't have upper and lower case, but it would be more usable and therefore useful. Here is a decent URL encoding function for PHP... // From private function base_encode($val, $base=62, $chars='0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ') { $str = ''; do { $i = fmod($val, $base); $str = $chars[$i] . $str; $val = ($val - $i) / $base; } while($val > 0); return $str; } For a similar project, to get a new key, I make a wrapper function around a random string generator that calls the generator until I get a string that hasn't already been used in my hashtable. This method will slow down once your name space starts to get full, but as you have said, even with only 6 characters, you have plenty of namespace to work with. Here is Node.js implementation that is likely to bit.ly. generate highly random 7 character string. using Node.js crypto to generate highly random 25 charset than random select 7 character. 
var crypto = require("crypto"); exports.shortURL = new function () { this.getShortURL = function () { var sURL = '', _rand = crypto.randomBytes(25).toString('hex'), _base = _rand.length; for (var i = 0; i < 7; i++) sURL += _rand.charAt(Math.floor(Math.random() * _rand.length)); return sURL; }; } alphabet = map(chr, range(97,123)+range(65,91)) + map(str,range(0,10)) def lookup(k, a=alphabet): if type(k) == int: return a[k] elif type(k) == str: return a.index(k) def encode(i, a=alphabet): '''Takes an integer and returns it in the given base with mappings for upper/lower case letters and numbers 0-9.''' try: i = int(i) except Exception: raise TypeError("Input must be an integer.") def incode(i=i, p=1, a=a): # Here to protect p. if i <= 61: return lookup(i) else: pval = pow(62,p) nval = i/pval remainder = i % pval if nval <= 61: return lookup(nval) + incode(i % pval) else: return incode(i, p+1) return incode() def decode(s, a=alphabet): '''Takes a base 62 string in our alphabet and returns it in base10.''' try: s = str(s) except Exception: raise TypeError("Input must be a string.") return sum([lookup(i) * pow(62,p) for p,i in enumerate(list(reversed(s)))])a Here's my version for whomever needs it. Implementation in Scala: class Encoder(alphabet: String) extends (Long => String) { val Base = alphabet.size override def apply(number: Long) = { def encode(current: Long): List[Int] = { if (current == 0) Nil else (current % Base).toInt :: encode(current / Base) } encode(number).reverse .map(current => alphabet.charAt(current)).mkString } } class Decoder(alphabet: String) extends (String => Long) { val Base = alphabet.size override def apply(string: String) = { def decode(current: Long, encodedPart: String): Long = { if (encodedPart.size == 0) current else decode(current * Base + alphabet.indexOf(encodedPart.head),encodedPart.tail) } decode(0,string) } } Test example with Scala test: import org.scalatest.{FlatSpec, Matchers} class DecoderAndEncoderTest extends FlatSpec with Matchers { val Alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789" "A number with base 10" should "be correctly encoded into base 62 string" in { val encoder = new Encoder(Alphabet) encoder(127) should be ("cd") encoder(543513414) should be ("KWGPy") } "A base 62 string" should "be correctly decoded into a number with base 10" in { val decoder = new Decoder(Alphabet) decoder("cd") should be (127) decoder("KWGPy") should be (543513414) } } A node js and mongodb solution Since we know the format that mongodb uses to create a new ObjectId with 12 bytes. - a 4-byte value representing the seconds since the Unix epoch, - a 3-byte machine identifier, - a 2-byte process id - a 3-byte counter (in your machine), starting with a random value. Example (I choose a random sequence) a1b2c3d4e5f6g7h8i9j1k2l3 - a1b2c3d4 represents the seconds since the Unix epoch, - 4e5f6g7 represents machine identifier, - h8i9 represents process id - j1k2l3 represents the counter, starting with a random value. Since the counter will be unique if we are storing the data in the same machine we can get it with no doubts that it will be duplicate. So the short URL will be the counter and here is a code snippet assuming that your server is running properly. 
const mongoose = require('mongoose'); const Schema = mongoose.Schema; // create a schema const shortUrl = new Schema({ long_url: { type: String, required: true }, short_url: { type: String, required: true, unique: true }, }); const ShortUrl = mongoose.model('ShortUrl', shortUrl); //The user can request to get a short URL by providing a long URL using a form app.post('/shorten', function(req ,res){ //create a new shortUrl*/ //the submit form has an input with longURL as its name attribute. const longUrl = req.body["longURL"]; const newUrl = ShortUrl({ long_url : longUrl, short_url : "", }); const shortUrl = newUrl._id.toString().slice(-6); newUrl.short_url = shortUrl; console.log(newUrl); newUrl.save(function(err){ console.log("the new url is added"); }) }); Here is my PHP 5 class. <?php class Bijective { public $dictionary = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; public function __construct() { $this->dictionary = str_split($this->dictionary); } public function encode($i) { if ($i == 0) return $this->dictionary[0]; $result = ''; $base = count($this->dictionary); while ($i > 0) { $result[] = $this->dictionary[($i % $base)]; $i = floor($i / $base); } $result = array_reverse($result); return join("", $result); } public function decode($input) { $i = 0; $base = count($this->dictionary); $input = str_split($input); foreach($input as $char) { $pos = array_search($char, $this->dictionary); $i = $i * $base + $pos; } return $i; } } I have a variant of the problem, in that I store web pages from many different authors and need to prevent discovery of pages by guesswork. So my short URLs add a couple of extra digits to the Base-62 string for the page number. These extra digits are generated from information in the page record itself and they ensure that only 1 in 3844 URLs are valid (assuming 2-digit Base-62). You can see an outline description at. Why not just translate your id to a string? You just need a function that maps a digit between, say, 0 and 61 to a single letter (upper/lower case) or digit. Then apply this to create, say, 4-letter codes, and you've got 14.7 million URLs covered. 
C# version: public class UrlShortener { private static String ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; private static int BASE = 62; public static String encode(int num) { StringBuilder sb = new StringBuilder(); while ( num > 0 ) { sb.Append( ALPHABET[( num % BASE )] ); num /= BASE; } StringBuilder builder = new StringBuilder(); for (int i = sb.Length - 1; i >= 0; i--) { builder.Append(sb[i]); } return builder.ToString(); } public static int decode(String str) { int num = 0; for ( int i = 0, len = str.Length; i < len; i++ ) { num = num * BASE + ALPHABET.IndexOf( str[(i)] ); } return num; } } /** * <p> * Integer to character and vice-versa * </p> * */ public class TinyUrl { private final String characterMap = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; private final int charBase = characterMap.length(); public String covertToCharacter(int num){ StringBuilder sb = new StringBuilder(); while (num > 0){ sb.append(characterMap.charAt(num % charBase)); num /= charBase; } return sb.reverse().toString(); } public int covertToInteger(String str){ int num = 0; for(int i = 0 ; i< str.length(); i++) num += characterMap.indexOf(str.charAt(i)) * Math.pow(charBase , (str.length() - (i + 1))); return num; } } class TinyUrlTest{ public static void main(String[] args) { TinyUrl tinyUrl = new TinyUrl(); int num = 122312215; String url = tinyUrl.covertToCharacter(num); System.out.println("Tiny url: " + url); System.out.println("Id: " + tinyUrl.covertToInteger(url)); } }
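For completeness, the same bijective base-conversion idea used by most of the answers above, as a small self-contained C sketch; the alphabet order and buffer sizes are arbitrary choices.

/* base62.c - encode/decode an auto-increment id as a short base-62 string. */
#include <stdio.h>
#include <string.h>

static const char ALPHABET[] =
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
#define BASE 62

/* Encode id into buf (buf must hold at least 12 chars for 64-bit ids). */
static void encode(unsigned long long id, char *buf)
{
    char tmp[16];
    int n = 0;
    do {
        tmp[n++] = ALPHABET[id % BASE];
        id /= BASE;
    } while (id > 0);
    for (int i = 0; i < n; i++)        /* reverse into the caller's buffer */
        buf[i] = tmp[n - 1 - i];
    buf[n] = '\0';
}

static unsigned long long decode(const char *s)
{
    unsigned long long id = 0;
    for (; *s != '\0'; s++)
        id = id * BASE + (unsigned long long)(strchr(ALPHABET, *s) - ALPHABET);
    return id;
}

int main(void)
{
    char buf[16];
    encode(239472ULL, buf);
    printf("%s -> %llu\n", buf, decode(buf));
    return 0;
}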
http://code.i-harness.com/en/q/b527d
CC-MAIN-2018-51
refinedweb
1,624
55.74
Random Software

We take a break from our regular conference schedule to think about software.

Random Numbers and π

During a talk being given at the conference by a friend, an example of computational complexity was given: determining the digits of π. This got me musing about something I did several years ago, where I demonstrated the Monte Carlo technique for someone by showing how it could be used to calculate the value of π.

The idea is pretty simple. Take a square with a side of length 2, and draw a circle of radius 1 inside, so it touches all 4 sides. The area of the square is therefore 4, and the area of the circle is π. This means that the ratio of the total area (the square) to the circle's area is 4:π, or 1:π/4. The Monte Carlo technique is to select random points, and determine the ratio of points inside the circle to the total number of points. This gives an approximation of the ratio of the areas. The more points you have, the better the approximation. Eventually, with an infinite number of points, the value of the ratio will approach π/4.

Picking a random point is trivial, as you just need values for x and y which are between -1 and 1. This gives a square which is centered on the origin. The point is then within the circle if:

x² + y² < 1

Actually, we can simplify a little further if we just pick the top right quadrant. This means that x and y are now between 0 and 1. The area of the quadrant of the square is 1, and the area of the quarter circle is now π/4, which is the same ratio.

So I knocked up a short program to compare these ratios and see how well it converges. I did it in just a couple of minutes, so rather than test for convergence, I printed out every 10th iteration so I could eyeball how it was going. This manual step was interesting, as it helped me to see what the data was doing.

An important aspect of doing this is the "randomness" of the Random Number Generator (RNG). For an approximate answer you don't need a particularly good one, but if you want the result to get better over time then you want some important properties. For instance, you want a flat distribution over the space. This means that every possible output has the same probability. You also want to ensure that, for any 2 observed sequences of n outputs from the RNG, the probability of the (n+1)th output being the same in both sequences is negligible. This is another way of saying that you don't want cycles. There are a lot of other properties, but these are the first 2 that come to mind.

Unfortunately, the random number generators in POSIX aren't that great. They are in fact pseudo-random number generators (PRNG). The rand(3) function is actually terrible, so it was replaced with rand48(3), and eventually random(3) (and the associated srandom). However, these are all based on pseudo-random functions. This means that they are cyclical functions, with only the low order bits returned as output (otherwise the output would encode the entire state of the function, meaning you could calculate the next number in the series - making it even less random). While they look sort of random, the distribution often leaves a lot to be desired.

The Linux /dev/random driver does its best to accumulate entropy in the system, from timing and content of keyboard/mouse actions, hard drive delays, and network packet timing, but that is usually an easily exhaustible pool of data, and so this is typically just used to "seed" one of the PRNGs. (I once used an email encryption program that just used the raw data from this device.
Every so often it would lock up while waiting on "random" data. I could restart it again by wiggling the mouse, hence providing /dev/random with some more data.) A better alternative is an external device that generates "true" random data by measuring thermal noise, background radiation, or something similarly obscure. However, to my knowledge these are generally only used in high end security applications.

The immediate thing that I noticed was that the ratio (×4) approached 3.14 very quickly, but then it started diverging again. No matter how many points were added, it would fluctuate as far out as 3.09 and 3.2. My first thought was that the accuracy of my intermediate values may have truncated some bits. So I moved the integer calculations of the random number up to 64 bits (making sure I didn't cut off any high order bits by accident), and moved the floating point calculations out to 128 bits. This is far more than is actually required, but at least I could now guarantee that I wasn't truncating anything.

The resulting code isn't very good, but in my defense I was trying to pay attention to the talk that I mentioned at the beginning:

#include <stdio.h>
#include <stdlib.h> /* for random() */

#define FREQ 10
#define MAX_RND 0x7FFFFFFFULL

int main(void)
{
  long all;
  long in = 0;
  long double x, y;
  long double pi;
  unsigned long long maxRndSq = MAX_RND * MAX_RND;

  for (all = 1; ; all++) {
    x = random();
    y = random();
    if (((x * x + y * y) / maxRndSq) < 1)
      in++;
    if (!(all % FREQ)) {
      pi = (long double)(4 * in) / all;
      printf("%Lf\n", pi); /* %Lf for long double */
    }
  }
  return 0;
}

Despite all this, the numbers didn't converge any better than they did when I was using fewer bits of accuracy. There may be other reasons, but I believe that it's just because the PRNG I was using is not really random enough.

Thinking along these lines, it occurred to me that this might be a simple and easy test to check if a PRNG (or even a RNG) is any good. Of course, you'd automate the test to check for:

a) Convergence
b) Convergent value

Convergence is a little awkward, since the value fluctuates around a bit before settling down. But if you check the deltas over time it wouldn't be hard. Testing for the convergent value is easy, since we already know the value for π.

Speaking of which, I counted this the other day and realized that I only know 27 decimal places. It occurred to me that if I can get up to 35, then I will know the same number of places as my age. Then I only need to remember one digit every year to keep up. By the time I'm an old man it might even be kind of impressive (not 67,000 digits impressive, but impressive nonetheless). David calls me a geek.

(Thanks go to Bruce Schneier for his books describing PRNGs for me all those years ago).
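A rough sketch of how such an automated check could look, in Python; the sample counts, the factor-of-two slack on the deltas, and the final tolerance are arbitrary assumptions rather than anything principled:

import math
import random

def estimate_pi(n):
    # Monte Carlo estimate using n points in the unit quadrant
    inside = 0
    for _ in range(n):
        x = random.random()
        y = random.random()
        if x * x + y * y < 1:
            inside += 1
    return 4.0 * inside / n

def check_rng(samples=(10_000, 100_000, 1_000_000), tolerance=0.01):
    # (a) convergence: the error should roughly shrink as the sample size grows
    # (b) convergent value: the final estimate should be close to the known pi
    errors = [abs(estimate_pi(n) - math.pi) for n in samples]
    converging = all(errors[i + 1] <= errors[i] * 2 for i in range(len(errors) - 1))
    close_enough = errors[-1] < tolerance
    return converging, close_enough, errors

if __name__ == "__main__":
    print(check_rng())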
Interviews

A little while ago I had an idea for some audio software that might clean up some of the digital distortion that radio broadcasters have to put up with when interviewing people remotely. I'd forgotten about it until I listened to the recent Talis interview with David. This interview was done over Skype, and recorded with Call Recorder for Skype, from Ecamm Network. Forgetting some of the unfortunate ambient sound at the Talis end, the recording is still of poor quality, specifically because of some of the audio degradation which occurs occasionally on the sound coming from David.

This degradation is an essential compromise for a real-time conversation, but because anyone downloading the podcast is not getting the data in real time, this is not a compromise they should be forced to make.

The solution that came to me is to have a Skype plugin at both ends (this should be easy to arrange for the interview). The plugin then does local recordings at both ends, and incorporates out-of-band timestamps into the audio data stream. Once the interview is over, the plugin then negotiates a file transfer from the interviewee to the interviewer. The process at the interviewer's end can now merge the two local recordings, and voilà, you have a recording with neither participant sounding like they are speaking across a digital chasm.

Anyone want to build this? I can think of a dozen radio stations that need it. :-)
http://gearon.blogspot.com/2007_05_01_archive.html
CC-MAIN-2017-09
refinedweb
1,443
68.4
Closed Bug 151620 (opened 19 years ago, closed 19 years ago)
use non-standard line-height for table cells, even in standards mode [INLINE]
Categories: Core :: Layout, defect, P2
Tracking: mozilla1.0.1
People: Reporter: emeyer, Assigned: karnaze
Keywords: topembed+; Whiteboard: [adt2 RTM] [ETA 06/24]
Attachments: 1 file, 4 obsolete files

The URL: field above points to an article that explains what Gecko does with images and line layout, and how that led to layout problems in table-based designs... and others, actually; images in DIVs can trigger the same problem. This issue was originally reported in bug 22274 and has been duplicated almost 100 times as of this entry.

What Gecko does in "quirks" mode is consistent with other browsers, which shrink-wrap table cells around images. In "standards" mode, spaces open up underneath such images (the article referenced above explains why). In some designs, this can literally rip a page into little pieces and fling them all over the place. Whether or not such design paradigms are sufficiently pure is beside the point: there are still questions, complaints, and so forth regarding this behavior.

This behavior has caused layout problems on Web sites that are using DOCTYPEs that trigger "standards" mode, and it will continue to do so with any site that adds an HTML 4 Strict DOCTYPE, or any XHTML DOCTYPE, even a transitional one. The behavior has already bitten sites like Orbitz, Match.com, and IBM, and will continue to be a problem in the future. Any site that converts to XHTML is highly likely to be hit with this problem the instant the conversion happens.

To be fair, the CSS Working Group has confirmed that the CSS2 specification does call for the behavior Gecko has implemented, and the Mozilla team is to be commended for doing such a thorough job of implementing the CSS2 standard. Nonetheless, I can guarantee that this is NOT what authors want, and no other browser engine does what Gecko does for precisely that reason. Again, consider the number of duplications of bug 22274. Perusal of 22274 also shows that there are some very strong opinions on what should be done.

In order to sidestep this issue, I am requesting the implementation of an internal preference that can be used to set which line-layout behavior is used in the rendering modes. This would let Mozilla retain its current behavior while allowing non-Mozilla Gecko embedding clients (Chimera, Netscape, Galeon, Compuserve, any others) decide whether they want the strict-CSS2 line layout behavior or not when in "standards" mode. This would almost certainly be the simplest, safest, and least controversial way to address this problem.

I want to be clear that this is not a proposal to change the quirks/standards switching mechanism. It is simply an internal preference, one that should never be exposed in the UI, that says, "when in standards mode, do line layout this way or that way." Nothing more. I am given to understand that this could be done with a very small patch. The sooner this can be done, the better.

I have set the severity to Major because I don't think I can justify calling it critical or blocker: this isn't a crash situation, or a security problem, or anything like that. But I do believe that this is an important change to make, especially if Gecko wants to be taken seriously as a desirable embedding candidate into the future.

Taking the bug.
Assignee: attinasi → karnaze
Priority: -- → P2
Target Milestone: --- → mozilla1.0.1
Status: NEW → ASSIGNED

The patch only fixes the cases involving table cells and doesn't use a pref. I think some of us agreed that a pref wasn't necessary in this case, but if I'm wrong, I can add one.

Comment on attachment 87598 (patch to fix the bug):

>+PRBool
>+nsBlockFrame::IsTableCellBlock(nsIFrame& aFrame)
>+{
>+  nsIFrame* parent;
>+  aFrame.GetParent(&parent);
>+  if (parent) {
>+    nsCOMPtr<nsIAtom> frameType;
>+    parent->GetFrameType(getter_AddRefs(frameType));
>+    if (nsLayoutAtoms::tableCellFrame == frameType) {

Any chance you could check here whether the content node is a td here? That would mean that this change wouldn't apply to non-XHTML XML.

>+      return PR_TRUE;
>+    }
>+  }
>+  return PR_FALSE;
>+}

Making this topembed, seeing this a lot these days. As for an actual pref, drivers might want one, not sure what mozilla.org's stance is on this.

dbaron, if a check is made for a <td>, it won't handle the html cases where elements have their display types changed to table-cell and the cases involving anonymous table cells (e.g. putting an image inside a row). If you think it is necessary to exclude xml, maybe it would be better to check the default namespace and exclude xml.

CSS3 provides a property to control this. Let's implement that instead of adding yet more complications to our inline box model (it gets limited enough testing as it is). I don't understand why any Mozilla embedder would want to break the standards in standards mode. Doing so is irresponsible. It would be even more irresponsible for us to consider letting embedders do that.
dbaron had asked that emeyer and the other evangelists clarify whether this was an issue that was confusing content authors who are creating _new pages_, or if it was an issue that was affecting a large number of _extant pages_. If the latter, then this change is more palatable to me. Hixie: If CSS3 is going to propose a property to explain IE and Opera behavior post hoc, then I'm having trouble understanding your objection here. At the point in time when the CSS3 property becomes well-defined, we'll remove this code, and implement that code. I suppose at that moment we'll change from "CSS2 non-compiliance" to "CSS3 compliance", but who really cares? Summary: RFE: Create internal preference to fix layout-shattering behavior → use non-standard line-height for table cells, even in standards mode Comment on attachment 87692 [details] [diff] [review] revised patch with dbaron's suggestion sr=waterson, contingent on dbaron as r=. Attachment #87692 - Flags: superreview+ Comment on attachment 87692 [details] [diff] [review] revised patch with dbaron's suggestion r=dbaron, although I'd rather see the |aMustHaveTdContent| argument removed (and, if you want, the function renamed to |IsTDTableCellBlock|). There's only one set of callers, so why bother with two ways of calling it (and the extra cost associated with passing and checking the argument). Attachment #87692 - Flags: review+ Well, my proposal was (and is) to make this an internal preference, so that embedding clients can choose what they want to do-- follow the standards, or do what authors want and every other rendering engine I've ever seen does. I have no interest in making all Gecko-based clients do the non-standards thing, but I also have no interest in Gecko continuing to shatter page layouts. If the Mozilla team can implement the CSS3 property 'line-box-contain' () to fix this behavior as fast as the client could be patched to let embedding clients to what they want, then I'm all for it. Then embedding clients could set up the UA styles to either do things the way Gecko does now, or the way every other browser does now. It would be almost like having an internal preference to set. The reason I proposed taking the latter approach at this time is that I've been repeatedly told that 'line-box-contain' won't be coming any time soon, and I think this problem needs to be addressed soon. Chris (comment 14): I still think an internal prefernce makes the most sense, because it will very closely mimic the situation when 'line-box-contain' is implemented. At that time, clients can set up their UA styles to do what they please. This was the whole idea behind creating the preference. However, to address your other point about new pages versus old pages: it's both. Every week or two, I see someone asking the question "Why does (Mozilla|NS6.x) break my page?" on an authoring mailing list or newsgroup. In some cases, it's someone creating a new design. In other cases, it's an author who finally downloaded a Gecko-based client, loaded up their Web site, and saw everything fall apart. We have also seen cases with major and minor corporate sites where they have taken an old design, rewritten it in XHTML, and put an XHTML Transitional DOCTYPE on the page. Even thought it's transitional markup, the XHTML bit puts them in standards mode, and their design falls apart. As I mentioned in my lead comment, IBM, Match.com, and Orbitz got hit by this, and I have no doubt at all that there are other sites out there that we just haven't found yet. 
This could become an especially difficult problem in Asia, where table-based graphic-heavy sites are apparently rather common. As more and more sites convert to XHTML (because it's the Right Thing To Do, after all), more and more designs will shatter. Whether or not a "large number" of sites are affected by this depends on what you consider to be "large." The problem as I see it is that a noticeable number of sites have been bitten by this problem, and that number will only grow over time. For an example of a site which exhibits this problem see. They use custom DOCTYPEs for internal validation purposes which will always force them into Standards mode. Currently there is a minor layout foo in the masthead which appears in Mozilla and Gecko embedded browsers that does not appear in IE or NN4. As an exercise please tell me how you would evangelize IBM on this. In particular I would be very interested in CSS rule(s) that I could give them that will fix the layout in their masthead and not break the layout of other images outside of the masthead on *any* of their pages without requiring that they QA/redesign their entire site. > dbaron had asked that emeyer and the other evangelists clarify > whether this was an issue that was confusing content authors who are > creating _new pages_, or if it was an issue that was affecting a > large number of _extant pages_. Extant pages should be using quirks mode. We've already made *massive* sacrifices for backwards compatability reasons in quirks mode. If there exist pages that are triggering standards mode but aren't following the specs, then it is _trivial_ for them to remove the DOCTYPE. Doing that really is easy. Standards mode is supposed to be the holy ground in which we do the right thing for authors who are themselves doing the right thing. Just because a bunch of incompetent web designers can't fix their sites doesn't mean we should have to go out of our way to break all the standards compliant ones. > Hixie: If CSS3 is going to propose a property to explain IE and > Opera behavior post hoc, The IE and Opera behaviour couldn't be explained using the property, since you can't change their behaviour, which is the whole point of the property. (It would be like saying that some of the table quirks can be explained by CSS. Microsoft tried using that argument, and Eric, David and I correctly pointed out that it was flawed.) > then I'm having trouble understanding your objection here. At the > point in time when the CSS3 property becomes well-defined, we'll > remove this code, and implement that code. We already know enough about what the property would be to implement it _now_ as a -moz-* property. So by your logic, that is what we should do, rather than implement a non-compliant hack. Since Eric has been told that 'line-box-contain' won't be coming any time soon, I have to question the likelyhood that "when the CSS3 property becomes well-defined, we'll remove this code, and implement that code". > I suppose at that moment we'll change from "CSS2 non-compliance" to > "CSS3 compliance", but who really cares? Me? And hopefully, all of you? > do what authors want and every other rendering engine I've ever seen > does. That is the mandate of quirks mode, not standards mode. Why should we even bother trying to follow the specs at all? Compliance isn't a buffet meal where you pick what you want to support. It's an agreement between browser implements and content providers to follow a set of rules. 
I am frankly ashamed that any of you are considering changing this in standards mode. You all know why standards compliance is so important. You all know what leverage our pedancy has against Microsoft. Considering that CSS2 was specifically and repeatedly clarified to describe the behaviour we implement, changing our behaviour would be a huge step backwards. We currently have an almost perfect inline box model. Changing it would not only be bad for authors who are following the spec (or trying to learn the specs using Gecko-based browsers as a guide), but it would be embarassing for us all. Now, I'm open to some compromises. How about these: * We could trigger quirks mode for XHTML Transitional documents sent as text/html. * We could trigger quirks mode for IBM's custom DOCTYPE. I basically agree with Ian, except for his proposed solutions (though perhaps that's just because I'm tired of fighting about this issue, and about standards in general). However, with the Netscape evangelism team saying that this bug is costing them too much time, I don't see much chance that Netscape will want to ship releases with this implemented correctly in the future, and I really don't want to fork the layout engine between Mozilla and Netscape (and the previously proposed change in Bugscape was a much broader change, since it wasn't limited to table cells) -- that would be incredibly confusing for web developers as well as even more damaging to our standards compliance than a smaller compromise. I'd really rather not push back further on our DOCTYPE boundary between standards mode and quirks mode (IBMs custom DOCTYPE is OK, but I'd rather not push back on XHTML transitional), since it is good to have real pages using standards mode (both for testing that we're really correct (recall the standards mode form controls) and because of the influence it has on pages). Bug 22274 is the main issue that keeps us from moving the standards/quirks boundary in the other direction. I think making this change only for table cells that are TD elements is a somewhat reasonable compromise -- after all, the working group really doesn't want this to be the observed behavior, and is trying to find solutions that allow it not to be. In theory, such a change shouldn't matter for pages that are *really* following the standards since they won't be using tables-as-layout. eric, based on dbaron's comment #21 regarding tables-as-layout, do you still feel strongly about having a pref? Because I think most of the developers feel strongly about not having a pref. >We already know enough about what the property would be to implement >it _now_ as a -moz-* property. So by your logic, that is what we >should do, rather than implement a non-compliant hack. Great! How long will it take to have a patch show up in this bug? Because if it can be properly implemented (as a -moz-*, that's fine with me) in short order, I'd be all for that-- it would have the effect of a preference switch, as I pointed out before. >Why should we even bother trying to follow the specs at all? >Compliance isn't a buffet meal where you pick what you want to >support. It's an agreement between browser implements and content >providers to follow a set of rules. Yeah, that argument was used in the style-attribute specificity case (bug 62150), and Mozilla never did follow the standards on that one. 
It does now by errata'ed fiat, but there was a long period where we willfully ignored CSS2 because it suited us to do so, and because "no other user agent follows the specification." I fail to see how this would be greatly different, at least as I originally proposed tackling this problem. Doing it via CSS or a CSS-like -moz-* construct would suit me as well-- if it can be done quickly. > We could trigger quirks mode for XHTML Transitional documents sent as > text/html. I know that many authors would consider that the right thing to do, because some of them have already been confused by our triggering "standards" mode on a "transitional" DOCTYPE. But I don't know that it's necessarily a good idea. > We could trigger quirks mode for IBM's custom DOCTYPE. I don't see how that would help in the general case, unless you'd like to have ANY custom DOCTYPE trigger "quirks" mode. Because odds are we'll see more custom DTDs cropping up over time, and many of them are likely to run into this problem. Re: comment 22: I still think that a pref would be the best compromise at this time, unless we can come up with an implementation of 'line-box-policy' or something like it. That would, as an example, let Mozilla do what it does now, and Netscape do what other commercial products do. Thus authors can have a client that does strict standards layout to test with, but not have to create workarounds for inconsistent layout behaviors between commercial clients. It also lets every embedding client make the choice: are we more interested in standards compliance or layout consistency? And as I say, either approach (pref or CSS extension) would serve that as well as the other for setting up UA defaults. 'line-box-policy' would offer more flexibililty than a simple pref, but I'm greatly concerned about implementation time. I'd also like to take a moment to note that this behavior does not only bite developers when working with tables. I recently answered a question on this behvaior, where a design was split between two DIVs and the DOCTYPE triggered standards mode. They couldn't figure out why a space opened up between the image in the first DIV and the border of the second. I was able to help them fix it easily enough, but it's an understandable reaction. I'm pointing this out because while tables are the vast majority of problem cases, they aren't the only ones. And also to highlight that if we implemented 'line-box-contain' or an equivalent, authors could choose what to do for themselves instead of depending on an internal pref. As long as it can be done quickly, it seems a better solution. But it may be better, from my perspective, to create an internal pref for the short term, and then rip it out when 'line-box-policy' is implemented a little later down the road. I would be more accepting of the "if we give on this point nobody will ever follow suit" argument if I thought anybody ever would. But in all honesty I believe that every other browser will just wait until CSS3 is published, and then claim to follow certain internal 'line-box-policy' values. That's what they did with style-attriubte specificity-- just waited until the specification was finally evolved enough to match their behavior. I see no reason why they would behave any differently in this case, especially given the more dramatic nature of the difference between the two layout behaviors. I did some research. 
Of _all_ the dupes of bug 22274, the following still exist (all are sent as text/html): URI DOCTYPE NOTES 4.0 strict shows problem 4.01 tran w/uri shows problem 4.01 tran w/uri shows problem 4.01 tran w/uri shows problem 4.01 tran w/uri shows problem 4.01 tran w/uri shows problem[1]... lowercase 4.01 shows problem[2] xhtml transition shows problem xhtml transition shows problem xhtml transition shows problem xhtml transition shows problem xhtml transition shows problem xhtml transition shows problem xhtml transition shows problem xhtml transition shows problem none site has adapted 4 tran w/uri site has adapted 4 tran w/uri site has adapted 4.01 strict site has adapted xhtml strict site has adapted xhtml strict site has adapted xhtml transition site has adapted xhtml transition site has adapted xhtml transition site has adapted xhtml transition site has adapted xhtml transition site has adapted xhtml transition site has adapted none worksforme none worksforme none worksforme attachment 60916 [details] none worksforme none worksforme none worksforme none worksforme 4.0 tran no uri worksforme 4.01 tran no uri worksforme 4.01 tran no uri worksforme... 4.01 tran no uri worksforme 4.01 tran w/uri worksforme 4.01 tran w/uri worksforme [3] xhtml strict worksforme attachment 50102 [details] none testcase 4 strict no uri testcase attachment 3171 [details] 4 strict no uri testcase 4 strict no uri testcase attachment 21919 [details] 4 tran w/uri testcase 4 tran no uri testcase 4 tran w/uri testcase.... 4 tran w/uri testcase 4.01 strict testcase attachment 46663 [details] 4.01 tran w/uri testcase... 4.01 tran w/uri testcase 4.01 tran w/uri testcase attachment 75360 [details] 4.01 tran w/uri testcase xhtml 1.1 strict testcase attachment 14083 [details] xhtml transition testcase attachment 32492 [details] xhtml transition testcase xhtml transition testcase [1] Site has partly adapted but still shows problems [2] <!doctype html public "-//w3c//dtd html 4.01 transitional//en"> [3] I'm not entirely sure why this site works. What we learn from this is that: of the 70 sites, a good dozen (those marked worksforme) have presumably switched to a quirks-mode DOCTYPE, another dozen have fixed their site, and of the dozen or so of the remainder real sites that still show the problem, more than half are using an XHTML Transitional DOCTYPE sent as text/html. OUR BEHAVIOUR IS HAVING A DIRECTLY MEASURABLE EFFECT. I cannot emphasise that enough. That alone should be a reason to not layout change our behaviour here. If we do want to do something, though, I think we should definitely consider changing our quirks mode DOCTYPE detection code rather than adding a weird quirk in standard mode. > Yeah, that argument was used in the style-attribute specificity case > (bug 62150), and Mozilla never did follow the standards on that one. Throughout the entire time that bug was being discussed in a Mozilla context, we were actively campaigning to have the spec changed. In the case of _this_ bug, the spec has been clarified several times, and it is clear that what we are doing is correct. >> We could trigger quirks mode for XHTML Transitional documents sent >> as text/html. > > I know that many authors would consider that the right thing to do, > because some of them have already been confused by our triggering > "standards" mode on a "transitional" DOCTYPE. I had no idea the proliferation of XHTML Transitional sent as text/html was as widespread as my recent examination of bug 22274 shows it is. 
Given this new information, I would now be quite happy to trigger quirks mode from that transitional DOCTYPE. (When it is sent as text/html.) >> We could trigger quirks mode for IBM's custom DOCTYPE. > > I don't see how that would help in the general case It doesn't, it's just a cheap way of fixing a high profile site. Given the low cost, low risk and high potential gain (it's a whole site off our list) I think we should go for it. > That would, as an example, let Mozilla do what it does now, and > Netscape do what other commercial products do. As I mentioned earlier, I am not worried about Mozilla's behaviour. Mozilla has about three users. Mozilla's behaviour will have no lasting effect on the Web at large. It's Netscape's behaviour that I care about. (Along with IE's, and AOL's, and other browsers with measurable market share.) > Thus authors can have a client that does strict standards layout to > test with, but not have to create workarounds for inconsistent > layout behaviors between commercial clients. Authors already have a good way of doing that; remove the DOCTYPE. > It also lets every embedding client make the choice: are we more > interested in standards compliance or layout consistency? As you know, I don't _want_ them making that decision. :-) > I would be more accepting of the "if we give on this point nobody > will ever follow suit" argument if I thought anybody ever would. So the way that WinIE is following us on CSS parsing, text-decoration, the box model, etc, is not enough? > But in all honesty I believe that every other browser will just wait > until CSS3 is published, and then claim to follow certain internal > 'line-box-policy' values. The CSS WG firmly established that this would not be satisfactory for exiting CR, and since exiting CR is something all browser makers now care about, I have a lot more faith that we'll eventually see interoperable implementations of this. Summary: use non-standard line-height for table cells, even in standards mode → use non-standard line-height for table cells, even in standards mode [INLINE] Comment on attachment 88159 [details] [diff] [review] alternate patch to (always) turn off strict mode line height via a pref I'm against this, as I said in comment 21. Attachment #88159 - Flags: needs-work+ I thought we agreed "no pref"? Do, or don't do. My thoughts:. OTOH, emeyer makes the claim that this quaint feature of CSS2 is too expensive (in terms of time, effort, or money, I'm not sure which) to evangelize. He ought to be able to back that up. So, why doesn't someone @netscape.com organize a conference call between emeyer and Hixie, and anyone else that would like to be involved in this debate, or needs to be able to deal with the technical fallout of any decision that is reached. The layout developers agreed to not have a pref. The patch is in response to comment #24. There are now 2 patches. I'm not going to decide which patch to use, although it sounds like it will be very difficult to get sr and r for the 2nd patch. I'm not going to check the 1st patch in until emeyer agrees to it. >. I volunteer to help anyone remove their DOCTYPEs, all they need do is give me access to their site and I'll do it for them. I also volunteer to help with the evangelisation work that this feature is requiring. Furthermore, I'd like to point out that I have proposed several alternative solutions that do not violate the standards. To wit: 1. Implement -moz-line-stacking-strategy per 2. 
Add the following DOCTYPEs to our list of DOCTYPEs that trigger quirks mode: HTML 4.01 Transitional with URI XHTML 1.0 Transitional sent as text/html <!doctype html public "-//w3c//dtd html 4.01 transitional//en"> <!DOCTYPE html SYSTEM ""> Of these two solutions I prefer the second. It has the least risk and is by far the easiest. See also bug 146125. Your offer to help sites is very generous yet not realistic. This is a chicken and egg situation where making changes driven by Gecko are not a priority for many sites. The browser will not be embedded if too many sites are messed up. As you can see Match.com is still broken, for months now, despite their being a major partner and site which has been aware of this problem. From : XHTML 1.0 Transitional [...] The idea is to take advantage of XHTML features including style sheets but nonetheless to make small adjustments to your mark-up for the benefit of those viewing your pages with older browsers [...] So I'm all for Hixie's suggestion to make all transitional doctypes trigger the quirks mode. That seems to be in agreement with the specs. Please, stick to the standards, it's *the* thing mozilla is extremely good at. About comment #33: Hixies suggestion solves the majority of the cases. Especially the xhtml cases, there people want a doctype and they often are to lazy to correct their markup. Transitional would be a godsend for them. Please, at least _try_ it with the transitional doctypes triggering quirks before you try to apply this patch/bug to mozilla. You'll see, it'll almost all of the sites work. While no request has been made for 1.0 branch approval yet, the 1.0 drivers (myself and Jud Valeski in particular) feel that Karnaze's patch is an appropriate compromise on a tough issue with no obvious "right" answer. There's a "right" answer from a standards point-of-view, and from a compatibility (and vendor/embeddor) point-of-view, but they conflict. We should move forward with this on trunk as well. I can't see what problem the 2nd solution in comment #31 would cause. It's a win-win situation. We would still be standards compliant and vendors would be saved from 95% of the current evangelism bug reports. Re comment 37 (and the second solution in comment 31): The problem with adding to our list of quirks doctypes is that we actually want standards mode to be used, for a number of reasons. First, standards mode doesn't get that much testing, and we don't want it to have lots of nasty bugs that will show up in a few years once more real pages start triggering it. (Consider our standards mode form controls, where, for a long time, we emulated Nav 4.x's form control behavior in quirks mode and IE's in standards mode, and the standards mode controls were severely broken until we switched to them by default and discovered all the bugs.) Second, the point of standards mode is that we want our standards support to influence new pages that are being written today, but we want the compatible behavior for older pages. Thus the current best-practice DOCTYPEs should all be triggering standards mode, not quirks mode. The reality of the web is that authors don't understand the standards (and often don't even use validators, which only test a small subset of the requirements of the standards), they just test against browsers. If they're testing against Mozilla running in standards mode, they're much more likely to write good pages than if they're testing against Mozilla running in quirks mode. 
Most web authors see "transitional" as beeing the logical equivalent for using "old" standards. A lot of comments say "I'm using transitional, but mozilla still renders it in standards mode". That's what's confusing web authors. That's what's confusing match.com. There are now more and more sites using "stricts", but it'll take it's time. We simply can't say we make standards mode non standards compliant to have more people testing the standards mode, that's inherently wrong. > ------- Additional Comment #33 From Susie Wyshak 2002-06-19 11:23 ------- > Your offer to help sites is very generous yet not realistic. I was merely pointing out that Waterson's statement (that I was able to claim that we should follow the spec because I did not have to worry about the consequences) is incorrect. I *am* willing to do the work required. Just point the way. > ------- Additional Comment #38 From David Baron 2002-06-19 13:36 ------- > The problem with adding to our list of quirks doctypes is that we > actually want standards mode to be used, for a number of reasons. There is no point _having_ a standards mode if it doesn't follow the standards. > Second, the point of standards mode is that we want our standards > support to influence new pages that are being written today And it's working -- just look at how many sites we've affected *with this issue alone*. (See comment 25 -- over 20% of sites listed have actually gone and fixed themselves because of us.) > The reality of the web is that authors don't understand the > standards, they just test against browsers. Which is why our standards mode should be as close to the standards as humanly possible. > If they're testing against Mozilla running in standards mode, > they're much more likely to write good pages than if they're testing > against Mozilla running in quirks mode. Of course. But the reality of the situation is that we *now* have a whole bunch of *existing* pages that trigger our standards mode but expect to get rendered in a backwards-compatible way. Furthermore, all the DOCTYPEs I listed are *transitional* DOCTYPEs. The whole point of the *transitional* HTML modes is that they be renderered in a backwards-compatible way. And what Arthur said. Re comment 39: There's a big difference between the continued ability to use old markup and the ability to have correct standards support. There's been strong support (see bug 42525) for a way to trigger standards mode while still using some old markup in valid documents. Also note that IE's standards mode is triggered by *any* doctype that contains a system identifier in addition to the public identifier -- our conditions are much stricter than IE's. Jud and I were referring to (suggesting) the "no-pref" patch that's r/sr'd If we allow agents to behave different from a stock Mozilla, Mozilla-Users will get more bad looking pages and maybe some embed agents with the pref set will get a deny - so the pref is no option imho. About Quirkt for XHTML 1.0 Transitional: I know many pages which only use Transitional because they need some special attributes, so you won't have a way to get cellpadding="" legal _and_ be in Standards mode. The others (non-official) DOCTYPEs (lovercased 4.01 and the internal IBM one have no W3C Standard behind them, so there's no standard we would break. so there are 2 real options: 1. get a stricter handling which DOCTYPE should go into Standards, which into Sloppy mode and send _all_ non 100% valid W3C DOCTYPEs (in addition to the current list) to Sloppy mode. 2. 
always use the non-standard line-height, which no webmaster really understands (I see at least 3 "Netscape sucks" with this problem in German SelfHTML forum - don't know, how many just kick Netscape 6/7/Mozilla without asking why it does this). dbaron: Our compliance is also a lot stronger than IE's. If there really are authors who want to trigger standards mode while using the transitional DTD, then they can trigger our standards mode by using an empty internal subset. Authors who are not concerned with legacy rendering behaviour really should just be using strict DTDs and stylesheets. Attachment #87692 - Attachment is obsolete: true Attachment #88159 - Attachment is obsolete: true Attachment #88365 - Flags: superreview+ Attachment #88365 - Flags: review+ The patch (always use quirk line-height inside a TD) is in the trunk. Status: ASSIGNED → RESOLVED Closed: 19 years ago Resolution: --- → FIXED Whiteboard: [FIXED_ON_TRUNK] Re comment 46: We want our standards compliance to affect more than just those who know and care about it. So we want this to apply to <td> elements in the XHTML namespace in arbitrary XML content? (I think we do, but...) As a secondary point, the + nsCOMPtr<nsIDOMHTMLTableCellElement> cellContent(do_QueryInterface(content)); + if (cellContent) { is pretty slow when |content| does not implement nsIDOMHTMLTableCellElement (for example, any time "display:table-cell" is applied to random XML). If this happens often, it would be much better to do: if (content->IsContentOfType(nsIContent::eHTML)) { nsCOMPtr<nsIAtom> tag; content->GetTag(*getter_AddRefs(tag)); if (tag == nsHTMLAtoms::td) { return PR_TRUE; } } Re Comment #49: What standards compliance? This bug is specifically about removing one of the corner stones of our efforst to implement the specs. VERIFIED FIXED on the trunk CVS tip. This caused regression bug 152959. Status: RESOLVED → VERIFIED Continuing the discussion from bug 22274 comment 76... Well I know its been a year, but the thing that I was thinking about proposing back then seems to be half implemented with this patch. In order to also cover comment 24, would it break anything (and how hard would it be to implement) to only use "strict" line-heights on inline items if there were #text nodes present in the containing block? Otherwise it would use the quirks-type calculation. This way if you had text, you'd presumably want the line-height to match that and use strict rendering, but if you did not have text, then you would not gain the seemingly magical (to anybody not familiar with the issues) line-height and it would work like it has in the past. You'd get the best of both worlds unless there are other things that would break. The benefits over bug 22274's resolution would be that most (if not all) pages should work (and it would cover <div>s too), and the benefit over this bug's resolution would be that we could still get the strict rendering in <td> elements. If this approach is feasible, I'll open another bug for it. Personally I'd like to see the Transitional DTD's moved to Quirks mode too. I should note about my proposal that if line-height was explicitly applied, it would be honored instead of following the above formula. (So we don't get another bug like 152959). It would only be the inherited line-height's that would go through this filter. Before this degenerates into irrelevant commentary, I'd like to slightly rephrase my comment. Unless we have a really good reason to use the slow QI method for that test, we should switch to the one I propose. 
For cases when the node is not an HTML table cell it's at least 20 times faster, last I checked.... Er, shouldn't this have the keyword for the release notes ? When will be fixed ? Boris, since your comments came after the patch was checked in, would you mind developing the patch and going through the approval and testing? I'm at a loss as to why this risky and controversial patch was checked in, reducing our standards compliance instead of the equally effective and much safer fix of making transitional DOCTYPE trigger our transitional mode. (The only reason that has been put forward is that it reduces the number of pages that are exposed to our standard mode handling, which seems a little silly considering the patch which has gone in.) No, it's not silly. The differences between our standards mode and quirks mode are quite large, and this fix does something that the CSS working group is busy adding a CSS property just so that it can be made the default. Wait, no, I have a better idea. We can just use quirks mode for all pages, and our strict mode will still be correct, and we can claim standards compliance, but it won't matter at all. Filed bug 152979, attached patch. Reviews are quite welcome (though I suspect no one really cares and it'll just be a waste of my time to have bothered). Actually, this patch is quite far from the behaviour the WG is discussing. I agree that it is unfortunate that authors are apparently assuming that "transitional" means "legacy rendering". But surely rendering pages that are written with this assumption in quirks mode is better than rendering _all_ documents incorrectly. Transitional documents are largely written with presentational HTML in mind, and CSS doesn't fit well with that kind of document. Quirks mode does. Strict documents, on the other hand, work very well with CSS. We should be encouraging authors to drop transitional altogether and write web pages that are structurally sound. Maybe we need three layout modes, as we had parsing modes at one point: Quirks, for HTML 3.2 and other legacy content, Transitional, which only has the few quirks that are needed for most validating but naively authored documents (i.e. doesn't have things like parsing unitless numbers as pixels, but does have the patch from this bug), and Standard, where we do what the spec says. Of course then we'd have even more to test and even fewer people testing each one, so that's not a perfect solution either. While I know your "better idea" was rhetorical, it is not analogous to what I am suggesting in comment 31, because there is no evidence that a large number of Strict DTD documents are failing because of this issue. There is ample evidence that the four DOCTYPEs I listed are used by sites affected by this "bug". > encouraging authors to drop transitional altogether This is unfortunately impossible while NS4 has any sort of marketshare (and at the moment it still holds far more than we do). The current setup allows a content author to use presentational attributes (which are not in strict HTML) so that legacy browsers will have about the right color, alignment, background while at the same time using stylesheets in the modern browsers and not having to worry about rendering quirks due to "backwards compatibility" in said modern browsers.... 
For the record, since I was saying it somewhere else: The determination as to whether to go into standards mode would ideally be something like that we should go into standards mode for all pages written more than a few months after Mozilla became a well-known user-agent on the web (say, beginning of 2001). Our doctype detection is merely the best approximation for that we could come up with. Re comment 62: Perhaps a three-modes solution would be a good idea. The middle mode would only be noncompliant for this bug, for loading stylesheets that aren't 'text/css', and a few other things. Then we could drop the bar a bit and use the middle mode for more pages. I'd be willing to consider that, and perhaps even implement it. You could file a bug on me if you want. Opened bug 153032 for implementing an additional rendering mode. I'd prefer that compared to the situation now where we have a quirks and a non-standards mode. bug 153035 added for my proposal bug 153039 added for DOCTYPE change proposal Resolve as you will. I feel that either or both of these should be done instead of what this bug did. I propose we back out the patch checked in for this bug and instead work on the proposal in bug 153032. Keywords: adt1.0.1, mozilla1.0.1, nsbeta1+ Whiteboard: [FIXED_ON_TRUNK] → [adt2 RTM] [FIXED_ON_TRUNK] [ETA 06/24] Adding adt1.0.1+ on behalf of the adt for checkin to the 1.0 branch. Please get drivers approval before checking in. When you check this into the branch, please change the mozilla1.0.1+ keyword to fixed1.0.1 Sadly, I have to reopen this bug. I have verified with some simple tests that the patch is indeed operating in cases where images are all alone in a TD. However, it would seem that if an image is wrapped in a hyperlink (as is the case on match.com) then the patch doesn't kick in-- I presume due to the presence of the anchor's node. Since most images in tables are links, the patch as it stands is nowhere near sufficient to address the problem. I apologize for not catching this before the patch went in; I should have thought the implications through more clearly ahead of time. In addition, consider: <td> <img src=blah.gif> </td> I presume the spaces to either side of the IMG prevent the patch from working, which would drastically limit its usefulness as well. We need something that can detect cases where a TD contains images and optionally whitespace (but no visible characters) and does the quirks-mode layout. The current patch isn't that inclusive. If that's somehow too risky, we could consider adopting the approach outlined in bug 153032, or something like it. Of course, it might carry its own risks... Status: VERIFIED → REOPENED Resolution: FIXED → --- removing adt1.0.1+, as this has been reopned. we'd really like this one on 1.0.1, so pls, pls come with a new fix soon. What you are asking for is basically the fully blown quirks inline box model, it would be very hard to implement it only for images (think :before, broken images showing alternate text, etc). I strongly recommend we back out the patch that was checked in and instead implement the suggestion gives in bug 153032. I (personally) would be happy with a solution as spelled out in bug 153032. Bug 153032 now has a new fix. I'm going to try and get reviews tomorrow. The fix for bug 153032 has landed on the trunk. Should some of the keywords from this bug go there? I'm not sure what the status of this bug should be, but it shouldn't be left open. 
Status: REOPENED → RESOLVED Closed: 19 years ago → 19 years ago Resolution: --- → WONTFIX This bug has a WONTFIX resolution but the status whiteboard says FIXED_ON_TRUNK. What does that mean? The fix was backed out of the trunk as part of the landing of bug 153032. Whiteboard: [adt2 RTM] [FIXED_ON_TRUNK] [ETA 06/24] → [adt2 RTM] [ETA 06/24] VERIFIED Status: RESOLVED → VERIFIED
https://bugzilla.mozilla.org/show_bug.cgi?id=151620
CC-MAIN-2021-25
refinedweb
8,005
69.62
Python Turtles are a great way to start kids in programming. Turtles offer a simple step-by-step graphical presentation that has tons of tutorials and examples. Turtles can also be used on Raspberry Pi projects. In this blog I wanted to look at a Turtle example that reads a temperature sensor and graphically shows the result as an "old style" thermometer.

Getting Started

Python Turtles is probably already loaded on your system, if not enter:

pip install turtles

The turtle library opens a graphic screen with the very center of the screen being (0,0). This is a little different than many other graphic systems (like PyGame) where the top left is (0,0). Different turtle objects can be defined and moved around the screen. A useful feature of turtles is that you can clear all the drawing from one turtle without affecting what the other turtles have done. (Note: this is useful in this thermometer example where we can have a static background turtle and a dynamic turtle that updates with new information).

Below is an example with 3 turtles. The first turtle (t1) is set to red and then moved forward, left, forward and then sent home. The second turtle (t2) is set to purple and given a turtle symbol. The third turtle (t3) is set to green and then moved to a position and a thick circle is drawn.

from turtle import *

setup(500, 400)
Screen()
title(" 3 Turtles")

# First red turtle goes forward, left, forward and back home
t1=Turtle()
t1.color("red")
t1.forward(100)
t1.left(90)
t1.forward(100)
t1.home()

# Second purple turtle has a turtle shape
t2=Turtle()
t2.shape("turtle")
t2.color("purple")
t2.right(45)
t2.forward(100)
t2.left(90)
t2.forward(100)

# Third green turtle goes to a location and makes a thick circle
t3=Turtle()
t3.color("green")
t3.pensize(10)
t3.up()
t3.goto(-100,-10)
t3.down()
t3.circle(80)

Drawing an "Old Style" Thermometer

For this project we wanted to draw an "old style" mercury thermometer, with a bulb of red mercury at the bottom and a tube above it. Using the simple turtle commands like move, left, right, forward, etc. is great for learning, but it can be awkward for more difficult drawings. A more efficient approach is to define an array of x,y coordinates and then move to each position in the array. For example the upper tube can be drawn by:

# Define an array of x,y coordinates for the top tube
outline = ((0,-50),(25,-50),(25,210),(-25,210),(-25,-50),(0,-50))

for pos in outline:
    # move to each tube x,y point
    thermo.goto(pos)
    thermo.pendown()

Circles are created using a turtle.circle(radius) command. To fill an object or a group of objects a turtle.begin_fill() and a turtle.end_fill() set of commands is used. For our example the filled circle for the bulb is created by:

# draw the filled bulb at the bottom
thermo.penup()
thermo.goto(0,-137)
thermo.pendown()
thermo.color("black","red")
thermo.begin_fill()
thermo.circle(50)
thermo.end_fill()

The complete code to draw the complete thermometer background would be:

# Create a background for an "old style" thermometer
from turtle import Turtle,Screen, mainloop
import random, time

# Define a Turtle object
thermo = Turtle()
thermo.penup()
thermo.hideturtle()
thermo.pensize(5)

# Define an array of x,y coordinates for the top tube
outline = ((0,-50),(25,-50),(25,210),(-25,210),(-25,-50),(0,-50))
for pos in outline:
    # move to each tube x,y point
    thermo.goto(pos)
    thermo.pendown()

# draw the filled bulb at the bottom
thermo.penup()
thermo.goto(0,-137)
thermo.pendown()
thermo.color("black","red")
thermo.begin_fill()
thermo.circle(50)
thermo.end_fill()

mainloop()

Raspberry Pi Hardware Setup

There are a number of different temperature sensors that can be used. For our example we used a low cost ($5) DHT11 temperature/humidity sensor. The DHT11 sensor that we used had 3 pins (Signal, 5V and GND), and we wired the Signal pin to the Pi physical pin 7.
There is a DHT temperature/sensor Python library that is installed by: sudo pip install Adafruit_DHT A Python DHT test program would be: #!/usr/bin/python import sys import Adafruit_DHT sensor_type = 11 # sensor type could also be 22, for DHT22 dht_pin = 4 # Note: BCM pin 4 = physical pin 7 humidity, temperature = Adafruit_DHT.read_retry(sensor_type, dht_pin) print( "Temp: ", temperature, " deg C") print( "humidity: ", humidity, " %") Turtle Thermometer Now for the final project we can start pulling things together. For the thermometer project we used 2 turtles, a static background turtle (thermo) and a dynamic turtle (bar). The bar turtle is cleared and redrawn in the drawbar() function. A screen object wn is used to resize the window and add a title. For testing a random integer can be used. This is also useful for checking the 0-40C range of the bar. The full code and an screen shot are below: from turtle import Turtle,Screen, mainloop import random, time import Adafruit_DHT # Update the temperature bar height and value def drawbar(temp): top = (-50 + 260 * temp/40) boutline = ((0,-50),(20,-50),(20,top),(-20,top),(-20,-50),(0,-50)) bar.penup() bar.clear() # clear the old bar and text bar.begin_fill() for pos in boutline: bar.goto(pos) bar.end_fill() bar.goto(30,top) bar.write(str(temp) + " C",font=("Arial",24, "bold")) # Setup a default screen size and Title wn = Screen() wn.setup(width = 500, height = 500) wn.title("RaspPi Temperature Sensor") # define a static thermo backgroup object and a dynamic bar object thermo = Turtle() thermo.penup() bar = Turtle() bar.color("red") bar.hideturtle() # define an array for the top tube outline = ((0,-50),(25,-50),(25,210),(-25,210),(-25,-50),(0,-50)) thermo.hideturtle() thermo.pensize(5) for pos in outline: thermo.goto(pos) thermo.pendown() # add some temperature labels thermo.penup() thermo.goto(50,-50) thermo.write("0 C") thermo.goto(50,210) thermo.write("40 C") # draw the filled bulb at the bottom thermo.goto(0,-137) thermo.pendown() thermo.pensize(5) thermo.color("black","red") thermo.begin_fill() thermo.circle(50) thermo.end_fill() # Update the temperature while True: humidity, temperature = Adafruit_DHT.read_retry(11, 4) # use a random number for testing #temperature = random.randint(0,40) drawbar(temperature) time.sleep(5) Final Comments Compared to other Python graphic libraries (like PyGame, Tkinter or Qt) Turtle graphics can be slow and perhaps limiting, but for kids Turtles projects are super easy to configure. If you are doing simple stuff Turtles requires a lot less code than the other graphic libraries (keyboard input is a good example of this). There are a lot of possible fun Raspberry Pi projects that can be done with Turtle. Some of the other projects that we have done include: - use a Wii remote to draw pictures - create a Turtle drawing as you drive a rover (show the path)
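To illustrate the keyboard-input point made in the final comments, here is a minimal sketch of turtle key handling; the key bindings and step size are my own example, not from the original post:

# Minimal sketch of turtle keyboard handling (illustrative only)
from turtle import Turtle, Screen, mainloop

t = Turtle()
wn = Screen()

def go_up():
    t.setheading(90)
    t.forward(20)

def go_right():
    t.setheading(0)
    t.forward(20)

wn.onkey(go_up, "Up")        # call go_up() when the Up arrow is pressed
wn.onkey(go_right, "Right")  # call go_right() on the Right arrow
wn.listen()                  # give the window keyboard focus
mainloop()

A few lines of setup is all that is needed, which is why turtle works well for quick keyboard-driven Pi projects.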
https://funprojects.blog/2019/03/26/python-turtles-on-raspberry-pi/
CC-MAIN-2022-40
refinedweb
1,106
54.93
Speech/speech activation - Try this. A file is recorded every 10 seconds (you can change it, max 60 seconds). A label is green while you record, and orange to warn you it will change file in 3 seconds. The file name flip/flops between 0test.m4a and 1test.m4a. To stop, say "stop" and pray it works 😂 Of course, it is not perfect: if the file changes just while a word is being pronounced...

import threading
import speech
import sound
import time
import os
import ui

class my_thread(threading.Thread):
    global my_ui_view,servers_thread,servers_speed
    def __init__(self,view,file):
        threading.Thread.__init__(self)
        self.file = file
        self.view = view
    def run(self):
        local = threading.local()
        local.file = self.file
        local.view = self.view
        def pr(l):
            local.view['tv'].text = local.view['tv'].text + l + '\n'
        #pr('recognize ' +local.file)
        try:
            local.t = speech.recognize(local.file,'fr')
            for local.m in local.t:
                if 'stop' in local.m[0].lower():
                    local.view.stop = True
                    break
                pr(local.m[0])
        except RuntimeError as e:
            pr('Speech recognition failed: '+str(e))
        os.remove(local.file)
        #pr(local.file+' deleted')

w,h = ui.get_screen_size()
v = ui.View()
v.background_color = 'white'
l = ui.Label()
l.frame = (10,10,100,20)
v.add_subview(l)
tv = ui.TextView(name='tv')
tv.frame = (10,50,w-20,h-50-10)
v.add_subview(tv)
v.present('full_screen')
j = 0
v.stop = False
while not v.stop:
    file = str(j)+'test.m4a'
    l.text = file
    recorder = sound.Recorder(file)
    recorder.record()
    l.background_color = 'green'
    time.sleep(10)    # duration of one file
    l.background_color = 'orange'
    time.sleep(3)
    l.background_color = 'red'
    recorder.stop()
    s = my_thread(v,file)
    s.start()
    j = 1 - j

@rex_noctis, maybe @JonB and @Mederic could figure out a way to use some kind of circular input audio buffer and continuous recording for this. Based on this thread they can go pretty deep on audio stuff, unfortunately in the wrong direction in that thread.

@mikael I know, but the problem, I think, is that speech recognition needs a file. I know that my little script is not the right solution; it is only to show him that another thread can do the job while you record.

Fwiw, you can overlap ping-ponging recorders so you never have gaps. This is what I mean. You would add your processing to the callback, but basically there are always two files recording and one processing, so words will never be cut off. In other words, file 1 might cut off "App", but file 2 started a little later, so it would get the whole "Apple". Obviously there would have to be other logic that switches to continuous record once the wake phrase is discovered.
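A rough sketch of the overlapping (ping-pong) recorder idea described above, using Pythonista's sound module like the original script; the timing values and the process() helper are illustrative assumptions, not code from the thread:

import sound, time, threading

SEGMENT = 10   # seconds each file covers (assumed value)
OVERLAP = 5    # second stream starts halfway through the first

def process(path):
    # placeholder: run speech.recognize(path, ...) in a worker thread here
    pass

def record_stream(name_prefix, stop_event):
    i = 0
    while not stop_event.is_set():
        path = '%s_%d.m4a' % (name_prefix, i % 2)
        rec = sound.Recorder(path)
        rec.record()
        time.sleep(SEGMENT)
        rec.stop()
        threading.Thread(target=process, args=(path,)).start()
        i += 1

stop = threading.Event()
# Two staggered streams: whatever one stream cuts off mid-word,
# the other stream catches whole.
threading.Thread(target=record_stream, args=('a', stop)).start()
time.sleep(OVERLAP)
threading.Thread(target=record_stream, args=('b', stop)).start()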
https://forum.omz-software.com/topic/5277/speech-speech-activation/3
CC-MAIN-2020-45
refinedweb
448
71.51
Here it is. I am trying to get an output that shows the highest number in an array and what number this number is in the array. When you run it, it shows the correct output for the second two arrays but it is off on the first one. Even when I switched it to find the lowest it came out correct. I don't know why it doesn't work for the first array. If you can determine why it would be greatly appreciated. Here is the code.

#include <iostream>
#include <iomanip>
using namespace std;

int FindIndexHighest(double [],const unsigned int );

int main()
{
    double x[] = { 11.11, 66.66, 88.88, 33.33, 55.55 };
    double y[] = { 9, 6, 5, 8, 3, 4, 7, 4, 6, 3, 8, 5, 7, 2 };
    double z[] = { 123, 400, 765, 102, 345, 678, 234, 789 };

    cout << fixed << setprecision(2);

    int index = FindIndexHighest (x, sizeof (x) / sizeof (x[0]));
    cout << "Array x: element index = " << index << " element contents = " << x[index] << '\n';

    index = FindIndexHighest (y, sizeof (y) / sizeof (y[0]));
    cout << "Array y: element index = " << index << " element contents = " << y[index] << '\n';

    cout << "Array z: element index = " << FindIndexHighest (z, sizeof (z) / sizeof (z[0]))
         << " element contents = " << z[FindIndexHighest (z, sizeof (z) / sizeof (z[0])) ] << '\n';

    return 0;
}

int FindIndexHighest( double num[], const unsigned int SZ )
{
    int a = 0;
    double highest = num[ 0 ];
    for( int i = 1; i < SZ; i++ )
    {
        if( highest < num[ i ])
            a = i;
    }
    return a;
}

Thanks Nate2430
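For what it's worth, the likely culprit is that FindIndexHighest never updates highest after finding a larger element, so every element is only compared against num[0] and the function ends up returning the index of the last element that beats num[0] (which happens to be the maximum in arrays y and z, but is 55.55 rather than 88.88 in array x). A corrected loop would look something like this (my sketch, not from the original post):

int FindIndexHighest(double num[], const unsigned int SZ)
{
    int a = 0;
    double highest = num[0];
    for (unsigned int i = 1; i < SZ; i++)
    {
        if (highest < num[i])
        {
            highest = num[i];   // remember the new maximum
            a = i;              // and where it was found
        }
    }
    return a;
}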
https://cboard.cprogramming.com/cplusplus-programming/5643-probably-simple-problem.html
CC-MAIN-2017-13
refinedweb
241
51.41
18 December 2008 11:31 [Source: ICIS news]

LONDON (ICIS news)--The Kuwaiti government will refer plans for the joint venture between Dow Chemical and Kuwaiti Petrochemical Industries Company (PIC) to the country's top legal authority, the Fatwa and Legislation Department, state news agency KUNA said late on Wednesday.

The move came after PIC said it would go ahead with the $7.5bn (€5.2bn) agreement, despite opposition.

Deputy Prime Minister Faisal al-Hajji said the joint venture project had on Sunday been referred to the audit bureau, which monitors state spending and revenues, KUNA reported.

Dow signed the agreement with PIC on 1 December to create the $17.4bn petrochemicals joint venture, expecting the deal to close no later than 1 January 2009. PIC agreed to pay Dow $7.5bn to establish the new company, $2bn less than their initial agreement in December 2007.
http://www.icis.com/Articles/2008/12/18/9180310/kuwait-refers-k-dow-deal-to-legal-authority-report.html
CC-MAIN-2014-52
refinedweb
149
53.71
I want to record the amount of code I write with Sublime. How can I get the filetype of an open file? And is there a way I can save the number to a file or storage? Right now I have this:

import sublime, sublime_plugin

count = 0

class CodeStats(sublime_plugin.EventListener):
    clicks = 0
    clicksTrigger = 10
    count = 0
    file_name = ''

    def on_modified(self, view):
        self.clicks += 1
        self.count += 1
        if self.clicks >= self.clicksTrigger:
            self.showStats()

    def showStats(self):
        print "code written", self.count
        self.clicks = 0

As of now I just call the print function every 10 keystrokes. Got any tips or ideas? Oh, and is there a way for me to code this in JS instead of Python?
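A rough sketch of how the two questions could be approached with the Sublime plugin API; the stats file name and JSON format are just assumptions I made up for the example:

import sublime, sublime_plugin
import os, json

class CodeStats(sublime_plugin.EventListener):

    def on_modified(self, view):
        # File type: either the file extension...
        name = view.file_name() or 'untitled'
        ext = os.path.splitext(name)[1]
        # ...or the syntax assigned to the view.
        syntax = view.settings().get('syntax')

        # Persist counts in a small JSON file in the User package
        # (writing on every keystroke is wasteful; batching would be better).
        path = os.path.join(sublime.packages_path(), 'User', 'code_stats.json')
        counts = {}
        if os.path.exists(path):
            with open(path) as f:
                counts = json.load(f)
        counts[ext] = counts.get(ext, 0) + 1
        with open(path, 'w') as f:
            json.dump(counts, f)

As for the last question: Sublime plugins have to be written in Python; the plugin API does not support JS.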
http://www.sublimetext.com/forum/viewtopic.php?p=50203
CC-MAIN-2015-48
refinedweb
120
78.45
Hey Guys, Today we are going to learn how to implement a custom user model in Django in the middle of an ongoing project. I will assume you are using Postgres as the database for your Django project, because this blog post will focus on implementing a custom user model in Django with a Postgres database. Let's get started..

1. Find all the references (foreign keys/one-to-one/many-to-many relationships) to Django's built-in User model inside your codebase.

2. Replace all those references by using a generic way to access the user model in Django with settings.AUTH_USER_MODEL. For that you need to first import settings in all those files and then replace the references like below.

from django.conf import settings

# For example if you have a model with a foreign key relation to user like below
class Example(models.Model):
    user = models.ForeignKey(User)

# Then just change it to something like below
class Example(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL)

3. Next find all references to Django's built-in User model inside any of the functions/methods of your codebase and replace them with get_user_model() like below.

from django.contrib.auth import get_user_model

# For example you have any function or method inside any of your files in the codebase
def get_all_users():
    return User.objects.all()

# Then just change it to something like below
def get_all_users():
    return get_user_model().objects.all()

4. Once you have done all of the steps above, you need to create a fresh app using the command below.

python manage.py startapp users

Note: I named my new app users but you can name it whatever you want.

5. Now open models.py in the users app and create a custom user model like below.

from django.db import models
from django.contrib.auth.models import AbstractUser

class User(AbstractUser):
    class Meta:
        db_table = 'auth_user'

Note: I have named the custom user model User for the sake of simplicity but you can name it anything. Now pay attention to the db_table meta attribute which I have set to auth_user. I did that because in case you already have users in your Postgres database and you want to retain those users, you will need to set db_table to auth_user.

6. Now open settings.py and add the users app in INSTALLED_APPS like below.

INSTALLED_APPS = [
    # ...
    'users',
]

7. Now also add AUTH_USER_MODEL in settings.py like below.

AUTH_USER_MODEL = 'users.User'

Note: In all these cases I have used users as my app name; you will need to set them according to your app name.

8. Now create the initial migrations for the user model like below.

python manage.py makemigrations

Note: Don't try to run python manage.py migrate at this point, because it will create an inconsistent migration history in the django_migrations table and it will create a non-existent content type in the django_content_types table. We can fix this issue by tweaking the Postgres database tables ourselves using queries.

9. Now open the terminal, go to the project directory where your manage.py exists, and run the below commands one by one.

echo "INSERT INTO django_migrations (app, name, applied) VALUES ('users', '0001_initial', CURRENT_TIMESTAMP);" | python manage.py dbshell

echo "UPDATE django_content_type SET app_label = 'users' WHERE app_label = 'auth' and model = 'user';" | python manage.py dbshell

Note: As you can see, in both of these commands I have used users as my app name; make sure to change it in case your app name is different.

10. Now you can go ahead and run the below command.

python manage.py migrate

11. Once the migration runs successfully, congrats!! Now you can go ahead and make changes to the custom user model, like adding a new field or overriding any existing field.
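As a small illustration of that last step, adding a field to the now-custom user model might look like this; the phone_number field is just an example I picked, not something from the post:

# users/models.py
from django.db import models
from django.contrib.auth.models import AbstractUser

class User(AbstractUser):
    # Example of a newly added field on the custom user model
    phone_number = models.CharField(max_length=20, blank=True)

    class Meta:
        db_table = 'auth_user'

Then run python manage.py makemigrations and python manage.py migrate as usual, since the app now owns the user table's migrations.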
https://raturi.in/blog/introduce-custom-user-model-middle-project/
CC-MAIN-2021-43
refinedweb
627
65.22
Step-by-Step Procedure to Schedule a Job In SAP HANA to execute a Stored Procedure

Hi Guys, There are many documents available on SAP HANA job scheduling, but I didn't find any proper document which says how to schedule the execution of a stored procedure in SAP HANA. Most of the time we get a requirement from the client where we need to update a table (or tables) on a specified time interval, say a week or month basis. So, here I come with an easy step-by-step process showing how to schedule a job in SAP HANA which will auto-execute a stored procedure after a specified interval.

Prerequisite:
- Physical table with data should be available.
- SAP HANA Developer Edition.
- XS Engine should be in running state.
- SPS07 or higher version.

Lets begin..

Note – First verify the XS Engine is up and running by using the following URL in your Internet Browser where the HANA Client is installed. (Replace 'localhost' with the IP. Contact the Basis / HANA Admin Team to get the IP.)

Result: If you get this page, that means the XS Engine is working fine. Lets move ahead..

Scenario – We will create a stored procedure which on execution updates the records in a table in the SAP HANA database. Then we schedule a job in the XS Engine which will run the stored procedure after a fixed interval.

Sample table present in the SAP HANA system: in the above table, we will update price by 10% on a selected date for a selected state, i.e. STATE_ID_FK=1.

Step 1: Create a stored procedure to update records

Create Procedure USP_UPDATE_PRICE as
begin
    update "<<SCHEMA_NAME>>"."<<TABLE_NAME>>"
    set price = price+(Price)*0.1
    where STATE_ID_FK =1;
end;

Note: We have not used any date filter in the where clause to keep the example simpler.

Step 2: Choose the SAP HANA Development Perspective by using the following navigation.

Step 3: Create a project in the SAP HANA Development Perspective as mentioned below.

Goto File -> New -> Project. The 'New Project' window will appear as below.
1. Select 'XS Project'.
2. Click on the Next push button.
1. Give the name of the project as 'XS_Job_Learning'.
2. Click on the Next push button.
1. Select the desired Repository Workspace.
2. Click on the Finish push button.

Step 4: Now we will create the following files in the project 'XS_Job_Learning':
- .xsaccess
- .xsapp
- MyJob.xsjob
- MyFirstSourceFile.xsjs

Add the following code in the respective files.

.xsaccess :
{
    "exposed" : true,
    "authentication" : [ { "method" : "Form" } ]
}

.xsapp : No need to add any code. This file will be blank.

MyFirstSourceFile.xsjs :
function My_table() {
    var query = "{CALL <<YOUR_SCHEMA_NAME>>.<<YOUR_STORE_PROCEDURE_NAME>>}";
    $.trace.debug(query);
    var conn = $.db.getConnection();
    var pcall = conn.prepareCall(query);
    pcall.execute();
    pcall.close();
    conn.commit();
    conn.close();
}

MyJobs.xsjob :
{
    "description": "Job to Update MY_TABLE values",
    "action": "XS_job_learning:MyFirstSourceFile.xsjs::My_table",
    "schedules": [
        {
            "description": "Table will update after every 59 sec",
            "xscron": "* * * * * * 59"
        }
    ]
}

Note: The "action" row is a very important line in the .xsjob file. It will search 'XS_job_Learning', then the 'MyFirstSourceFile.xsjs' file, and try to execute the 'My_table' function, which will execute the created stored procedure. If you want more information about cron jobs, then see Cron – Wikipedia, the free encyclopedia.

Step 5: Now you have completed all your coding at desktop level. Move your files to the repository. Please follow the below mentioned steps for the same.
Right click on Project Name -> Team -> Check
Right click on Project Name -> Team -> Activate All

Step 6: Schedule the job in the XS engine: please type <<Your_Path>> in the browser. (This is the XS_job_Learning path in the repository. For the complete path, you can coordinate with your Basis / HANA admin team.) Please login using appropriate credentials.

Step 7: After successful login, you will find 'XS_job_Learning' -> MyJobs.xsjob in the Application Objects window. On the above screen, please select the 'Active' checkbox and click on the 'Save' push button to activate job scheduling in SAP HANA. This completes the job scheduling process. After this, the job is auto-scheduled at the time interval specified in MyJobs.xsjob. To unschedule the job, uncheck the 'Active' checkbox and click on the 'Save' push button.

Observation – The scheduled job undergoes three steps, i.e.
- SCHEDULED
- RUNNING
- SUCCESS
which can be observed in the Status column. Under the JOB LOG window, job execution is monitored by clicking on the Refresh button.

Step 8: View the result in the SQL panel. The results display the price updated by 10%.

Note: You need to provide the following rights to users to execute a job in the XS engine.

Enjoy….. Happy Learning

Comments:

Why write an XSJS when all it does is call the Stored Procedure? You can just schedule the Stored Procedure itself from the XSJOB.

Hi Thomas, Thanks for your comment. I want to show one more way to schedule a job here.

I think you missed the point of the comment. The XSJS wrapper around the Stored Procedure is completely unnecessary. XSJOBs can directly call a Stored Procedure as well as XSJS services. It simply isn't needed to have the XSJS call the Stored Procedure as you are doing. It's adding overhead without value.

Hi, I do understand your concern / point. The same method is used in the SAP HANA developer document at page number 478 / topic 8.7.1.1 (). Here, I just tried to make it simple for freshers, so they can schedule a job without any problem.

>Same method is used in SAP HANA developer document at page number 478 / topic 8.7.1.1
No, the same method isn't used. In the Developer Documentation they are calling an XSJS, yes; but that's to execute XSJS. If you want to call a Stored Procedure you should simply schedule it directly within the XSJOB.
>here, i just tried to make it simple for freshers. So, they can schedule a job without any problem.
But you are telling them the wrong thing. If they want to schedule a stored procedure they don't need all the steps you are suggesting. You are over complicating things. Although I've been unable to convince you of this, I hope that others reading this blog will see the comments and realize that what you propose is incorrect and inefficient and ignore the content.

Hi Thomas, what is the right syntax for calling a stored procedure in an .xsjob file? I did not find anything in the documentation. I tried this:

{
    "description": "Update information",
    "action": "CALL \"MY_SCHEMA\".\"MY_PROCEDURE\" ()",
    "schedules": [{
        "description": "Update information every 10 second",
        "xscron": "* * * * * * 0:59/10"
    }]
}

... this ...

"action": "MY_SCHEMA.MY_PROCEDURE"

... and several other combinations. But every time I get: Error -> Parsing job action failed. regards Konstantin

Definitely don't use the CALL and (). Just the procedure name. However you might need a synonym to remove the schema name if you are using non-repository procedures. For repository procedures it's just <namespace>::procedure. I have an example here:

Hi Thomas, Could you please guide me on using the procedure name in the action property of xsjob? While giving the procedure I am getting the following error: Required object does not exist: either specify 'schema' in xsjob or provide a public synonym as job. I tried giving the procedure name with schema, without schema, and also with a synonym, but it's still failing. I tried 'schema' as a property but maybe it's not there in xsjob. Kindly help with this; if you can provide me some documentation or some kind of syntax, because I am not able to find this on the web. Thanking you, Manish

Please see the comment right above this one. There is a video link which shows an example.

Although I've been unable to convince you of this, I hope that others reading this blog will see the comments and realize that what you propose is incorrect and inefficient and ignore the content.

Hi Thomas, I just wanted to check if there is any way to set a dependency between an XS job and a Maestro job. My requirement is to execute my XS job in System B (Child Company) after the completion of a Maestro job in System A (Parent Company). I have the SAP documentation and could not find any valuable information on the same. Appreciate the feedback! Thanks Siva

No, there is no connection to a Maestro job.

Hi, I have a procedure which I can run using CALL "I7xxxxx"."UPDATE_MARGIN"; I want to schedule it using xsjob. What should we write in the action inside the xsjob file? I tried the following and some variants of it as well, but it didn't work. "action": "I7xxxxx::USP_UPDATE_MARGIN", Please suggest the syntax for "action" for stored procedures. Thanks, Chirag

Hi Chirag, Did you get the syntax for the procedure? My procedure statement is: call "SCHEMA1"."PK1.PK2::getdata"() and I am not able to derive the action tag. Regards, Ashwini

Hi Thomas, can I schedule a stored procedure that is submitted via a trigger? My stored procedure inputs are the Contents package path and the "browser" role. We are not granting select on _SYS_BIC but want an automated way to update a "browser" role specific to the Contents package path. I have the stored procedure working but want a way for developers to submit it on demand. Our process prevents developers from previewing the views until the view is added to the "browser" role. Our developers are not allowed to update roles; that is a security team function only.

Hi Thomas, I am trying to schedule the XSJOB running sqlscript and that seems pretty easy. But when I connect to the XS Job Dashboard and update the configuration screen for the job, I get an error message that the Application Privilege is missing. I have the ...JobAdministrator role assigned to the ID. What other Application Privilege do I need?

I am getting the below error when I go to the Application Objects window and select my package, which is 'XSJOB' here. I have got the below roles assigned as well. Any idea why I could be getting this error?

Check with the security team to get the roles.

This was sorted the same day. Thanks

Hi All, I am very new to SAP HANA and I am wondering if I can read a csv file from a package, write back to a csv file, and then send out the csv file as an email attachment using xsjs and xsjob? Is there any library to read/write csv files using xsjs? Thanks a lot

Nice Blog. Thanks.
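To make the direct-call suggestion in the comments above concrete, a job file that schedules a repository procedure (rather than an XSJS wrapper) might look roughly like this; the namespace, procedure name and interval are made-up placeholders, so treat it as a sketch based on the comment's "<namespace>::procedure" hint rather than verified syntax:

{
    "description": "Run the update procedure directly (illustrative sketch)",
    "action": "my.package::USP_UPDATE_PRICE",
    "schedules": [
        {
            "description": "Every 59 seconds",
            "xscron": "* * * * * * 59"
        }
    ]
}

As noted in the comments, non-repository (catalog) procedures may additionally need a public synonym so the schema name can be dropped from the action.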
https://blogs.sap.com/2015/03/19/step-by-step-procedure-to-schedule-job-in-sap-hana-to-execute-stored-procedure/
CC-MAIN-2022-33
refinedweb
1,677
66.74
16 June 2008 22:26 [Source: ICIS news]

HOUSTON (ICIS news)-- US base oils producer CITGO confirmed on Monday it would permanently shut down its Lake Charles lube plant in Louisiana prior to year end.

"With regard to the 12,000 bbl/day base oil lubricant and wax plant, CITGO has initiated plans to cease production at the facility during the second and third quarters of this year," company spokeswoman A Shawn Trahan said.

The confirmation followed speculation that CITGO would exit the base oils business, following a letter the company sent to buyers earlier this year about reducing production. "I think everyone saw this coming but certainly not this soon," a major contract buyer said.

The decision was made in response to a continuing, downward trend in market conditions for CITGO's products and an effort to remain competitive in the industry, Trahan said. CITGO is the Houston-based arm of Venezuela's state oil company PDVSA.

"Base oil producers are having their worst year in perhaps more than a decade as record oil prices cut into profits for refining," a seller said. Many producers have chosen to divert feedstock to make other refined products rather than base oils.

"But losing money is no small problem in the world of refining," a distributor said. "We have now seen two base oil closures in the last 45 days." CITGO follows Marathon, which told buyers in April it would close its base oils plant.

CITGO said it is seeking potential buyers for the facility to better align its base oil production capacity with the needs of the finished lubricants marketplace. "This decision will better align CITGO base oil production … with the needs of the finished lubricants marketplace," Trahan said.
http://www.icis.com/Articles/2008/06/16/9132739/citgo-confirms-us-base-oils-exit.html
CC-MAIN-2015-11
refinedweb
279
56.79
Convenience library for working with etags in fastapi

fastapi-etag

Quickstart

Basic etag support for FastAPI, allowing you to benefit from conditional caching in web browsers and reverse-proxy caching layers. This does not generate etags that are a hash of the response content; instead it lets you pass in a custom etag generating function per endpoint that is called before executing the route function. This lets you bypass expensive API calls when the client includes a matching etag in the If-None-Match header; in this case your endpoint is never called, instead returning a 304 response telling the client nothing has changed.

The etag logic is implemented with a fastapi dependency that you can add to your routes or entire routers.

Here's how you use it:

# app.py
from fastapi import Depends, FastAPI
from starlette.requests import Request
from fastapi_etag import Etag, add_exception_handler

app = FastAPI()
add_exception_handler(app)

async def get_hello_etag(request: Request):
    return "etagfor" + request.path_params["name"]

@app.get("/hello/{name}", dependencies=[Depends(Etag(get_hello_etag))])
async def hello(name: str):
    return {"hello": name}

Run this example with uvicorn:

uvicorn --port 8090 app:app

Let's break it down:

add_exception_handler(app)

The dependency raises a special CacheHit exception to exit early when there's an etag match; this adds a standard exception handler to the app to generate a correct 304 response from the exception.

async def get_hello_etag(request: Request):
    name = request.path_params.get("name")
    return f"etagfor{name}"

This is the function that generates the etag for your endpoint. It can do anything you want; it could for example return a hash of a last modified timestamp in your database. It can be either a normal function or an async function. The only requirement is that it accepts one argument (request) and that it returns either a string (the etag) or None (in which case no etag header is added).

@app.get("/hello/{name}", dependencies=[Depends(Etag(get_hello_etag))])
def hello(name: str):
    ...

The Etag dependency is called like any fastapi dependency. It always adds the etag returned by your etag gen function to the response. If the client passes a matching etag in the If-None-Match header, it will raise a CacheHit exception which triggers a 304 response before calling your endpoint.

Now try it with curl:

curl -i "http://localhost:8090/hello/bob"
HTTP/1.1 200 OK
date: Mon, 30 Dec 2019 21:55:43 GMT
server: uvicorn
content-length: 15
content-type: application/json
etag: W/"etagforbob"

{"hello":"bob"}

Etag header is added.

Now including the etag in the If-None-Match header (mimicking a web browser):

curl -i -X GET "http://localhost:8090/hello/bob" -H "If-None-Match: W/\"etagforbob\""
HTTP/1.1 304 Not Modified
date: Mon, 30 Dec 2019 21:57:37 GMT
server: uvicorn
etag: W/"etagforbob"

It now returns no content, only the 304 telling us nothing has changed.

Add response headers

If you want to add some extra response headers to the 304 and regular response, you can add the extra_headers argument with a dict of headers:

@app.get(
    "/hello/{name}",
    dependencies=[
        Depends(
            Etag(
                get_hello_etag,
                extra_headers={"Cache-Control": "public, max-age: 30"},
            )
        )
    ],
)
def hello(name: str):
    ...

This will add the cache-control header on all responses from the endpoint.

Contributing

See CONTRIBUTING.md
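As a small illustration of the "hash of a last modified timestamp" idea mentioned above, an etag function could look like this; get_last_modified() is a hypothetical database helper I made up for the example, not part of the library:

import hashlib
from starlette.requests import Request

async def get_item_etag(request: Request):
    item_id = request.path_params["item_id"]
    last_modified = await get_last_modified(item_id)  # hypothetical DB lookup
    if last_modified is None:
        return None  # no etag header will be added
    # Any stable string derived from the timestamp works as the etag
    return hashlib.sha1(str(last_modified).encode()).hexdigest()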
https://pypi.org/project/fastapi-etag/
CC-MAIN-2021-17
refinedweb
558
52.19
What does the below C program mean?

void swap(int *x,int *y)
{
    int t;
    t = *x;
    *x = *y;
    *y = t;
}

this is a meaningless program. to swap two variables, you have to pass their references not values.

There is a star before each x & y.

The program swaps (interchanges) the values of x and y (if there is a star, like you have mentioned), does nothing otherwise. This function should be called as swap(&x,&y), as pointers store the address of the variable they point to. Getting inside the function, t will store the value stored at &x, address &x stores the value stored at &y, and finally &y will store t.

if you want swapping with pointers you have to do it this way, i.e. using their references. what you did is writing the code without using pointers. I think you have to learn more about pointers in function passing; refer to the below url for more information

void swap(int *x,int *y)
{
    int t;
    t=*x;
    *x=*y;
    *y=t;
}

If you are facing markdown problems: You should try using the escape character \ before typing any other special character. This will stop/escape the normal functioning of that special symbol (i.e. to print the desired thing using pointers write \*t=\*x in a comment).
PS: To copy paste code in a post use the 101010 formatting in the formatting toolbar. This will take care of all necessities/symbols in the code.

About your function: here is a basic prototype to swap two numbers. If a function is used to swap numbers then that function should receive the address of both numbers (call by address), otherwise it's like making changes in a photocopy (call by value).

#include <stdio.h>

void swap(int *x,int *y)
{
    int t;
    t = *x;
    *x = *y;
    *y = t;
}

int main(void)
{
    int x,y;
    x=5;
    y=6;
    swap(&x,&y);
    printf("%d %d",x,y);
    return 0;
}

This will swap x and y, but there is no use if they are swapped only locally in that function; if you want to swap them in the main function as well, then you should pass their address using pointers, or you could pass their references in C++.

This will swap x and y, but only for the parameters x and y (call by value), and these changes will not be reflected in the caller function. If you want them to be reflected, call it by reference.

Call by Value
If data is passed by value, the data is copied from the variable used in, for example, main() to a variable used by the function. So if the data passed (that is stored in the function variable) is modified inside the function, the value is only changed in the variable used inside the function. Let's take a look at a call by value example: ().

Call by Reference
If data is passed by reference, a pointer to the data is copied instead of the actual variable, as is done in a call by value. Because a pointer is copied, if the value at that pointer's address is changed in the function, the value is also changed in main(). Let's take a look at a code example: .

source-
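Since the two example snippets referenced at the end were lost from the page, here is a rough stand-in illustrating the difference being described (my own example, not the original linked code):

#include <stdio.h>

void by_value(int a)    { a = 99; }    /* changes only the local copy */
void by_address(int *a) { *a = 99; }   /* changes the caller's variable */

int main(void)
{
    int n = 1;
    by_value(n);
    printf("%d\n", n);   /* still 1: the copy was modified, not n */
    by_address(&n);
    printf("%d\n", n);   /* now 99: the function wrote through the pointer */
    return 0;
}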
https://discusstest.codechef.com/t/pointers-in-c/13139
CC-MAIN-2021-31
refinedweb
559
75.74