text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
---|---|---|---|---|
New York Washington, D.C. Los Angeles Palo Alto London Paris Frankfurt Tokyo Hong Kong Beijing Melbourne Sydney April 14, 2016 Related-Party Debt / Equity Regulations IRS Issues Proposed Regulations Intended to Limit Earnings Stripping but Which—if Finalized—Would Broadly Change the U.S. Tax Treatment of Related-Party Indebtedness SUMMARY On April 4, 2016, the IRS and Treasury Department issued proposed regulations (the “Proposed Regulations”) that would—if finalized in their current form—treat related-party debt as equity for U.S. tax purposes in certain circumstances. Although the Proposed Regulations were issued concurrently with temporary regulations aimed at so-called “inversion” transactions, the Proposed Regulations would have a much broader impact, and would—in cases where a debt instrument is recharacterized—often increase the U.S. tax liability of the affected group. As discussed further below, the Proposed Regulations introduce three new sets of rules. The first aspect of the Proposed Regulations (the “Distributed Debt Rules”) is intended to modify the tax treatment of transactions that can create related-party leverage without an investment of new capital, and applies to instruments that are distributed to a related party, issued in exchange for stock of a related party, or issued in other transactions (including certain asset reorganizations and transactions that may fund distributions) that the IRS and Treasury Department believe can be used for similar purposes. The second element of the Proposed Regulations (the “Documentation Requirements”) introduces new recordkeeping and similar rules for issuers and holders of related-party debt. The third set of rules within the Proposed Regulations (the “Bifurcation Authority”) permits the IRS to characterize an instrument as partly indebtedness and partly equity for U.S. federal income tax purposes. The Proposed Regulations would affect both foreign-parented groups that make inbound U.S. investments and U.S.-parented groups that own foreign subsidiaries. While the Proposed Regulations would not affect debt obligations among members of a group of U.S. corporations that files a consolidated U.S. federal income tax return, several common types of entities—such as regulated investment companies (“RICs”), real estate investment trusts (“REITs”) and certain insurance companies—cannot be included on consolidated returns. Although the Proposed -2- Related-Party Debt / Equity Regulations April 14, 2016 Regulations are intended to limit certain transactions that the IRS and Treasury Department believe are motivated largely by tax considerations (such as internal restructurings that allow foreign parent corporations to create high levels of related-party leverage within their U.S. groups to “strip” earnings without investing new capital), the Proposed Regulations are drafted broadly, and may be pertinent to routine funding arrangements and other transactions that are not considered tax-driven. We understand that the IRS and Treasury Department are seeking to finalize the Proposed Regulations before September 5, 2016. If the Proposed Regulations are finalized in their current form, the Distributed Debt Rules would generally apply (beginning 90 days after final regulations are issued) to instruments issued on or after April 4, 2016. The Documentation Requirements and Bifurcation Authority are not proposed to take effect until the Proposed Regulations are published in final form, but would take effect immediately on that date and generally must be satisfied within 30 days of the date when an obligation is issued. BACKGROUND The Proposed Regulations are issued under Section 385 of the U.S. Internal Revenue Code (the “Code”), which authorizes the U.S. Treasury Department to promulgate regulations setting forth factors for determining whether an interest in a corporation constitutes debt or equity and provides a nonexclusive list of factors that could be included in such regulations. Although regulations were previously issued under this provision in 1980, the 1980 regulations were withdrawn (after several revisions) in 1983. DISCUSSION A. EFFECT OF THE PROPOSED REGULATIONS If a debt instrument does not satisfy the requirements set forth in the Proposed Regulations, the Proposed Regulations would generally treat that obligation as equity, a result that can have wideranging, potentially uncertain, and often undesirable tax consequences. For example, interest on a debt obligation issued by a U.S. subsidiary to a foreign parent would not be deductible if that note were characterized as equity. When received, such interest would also potentially be treated as a dividend that could be subject to U.S. withholding tax,1 and a further deemed dividend (which could also be subject to U.S. withholding tax) could arise when the recharacterized debt instrument matures, is redeemed or is sold. Likewise, if a debt obligation issued by a foreign subsidiary to its U.S. parent was characterized as equity, all or a portion of the proceeds from the subsequent sale, redemption or maturity of that instrument could be treated as a taxable dividend. Equity treatment 1 A 30% U.S. withholding tax is generally imposed on dividends arising from sources within the United States. Although this 30% rate may be reduced (including, in some cases, to 0%) by a treaty, such reduction or elimination is available only if a treaty exists between the United States and the country where the dividend recipient resides and the requirements of that treaty are satisfied. Some, but not all, U.S. tax treaties provide for a 0% withholding rate on certain dividends, but such 0% withholding is generally available only if strict ownership criteria (which— for example—may require the non-U.S. recipient to have directly owned at least 80% of the voting stock of the U.S. payor for a specified period of time) are met. -3- Related-Party Debt / Equity Regulations April 14, 2016 could also be relevant to instruments that are both issued and held by foreign subsidiaries of U.S. corporations. For example, the status of an obligation as debt or equity can affect the allocation of tax attributes (such as “earnings and profits”) that are relevant to a U.S. parent’s tax liability under the “controlled foreign corporation” rules. Recharacterization of debt issued by a taxable REIT subsidiary to a REIT could have very material adverse consequences to the REIT.2 An obligation could be recharacterized under the Proposed Regulations without regard to how “debtlike” that instrument would otherwise be, and irrespective of whether the obligation was issued for substantial business purposes. By contrast, an obligation that is not recharacterized as equity by the Proposed Regulations would not necessarily be respected as indebtedness for U.S. tax purposes; rather, in such cases, general debt / equity principles (as developed in common law and administrative guidance)3 would apply to determine whether that instrument is treated as debt or equity. B. GENERAL SCOPE OF THE PROPOSED REGULATIONS In general, the Proposed Regulations are relevant only to obligations that exist between members of what the Proposed Regulations refer to as an “expanded group”. An “expanded group” generally includes a parent corporation, together with any corporation that is at least 80% owned (by either vote or value, and directly or indirectly) by that parent corporation.4 Although this definition includes both U.S. and foreign corporations, the Proposed Regulations treat domestic corporations that file a consolidated U.S. tax return (a “consolidated group”) as a single entity, meaning that the Proposed Regulations generally will not affect debt instruments that exist entirely between members of a consolidated group. It should, however, be noted that certain special types of U.S. domestic entities, such as REITs, RICs and certain insurance companies cannot be included in a consolidated group. 2 REITs are permitted to own “taxable REIT subsidiaries,” which may engage in activities that a REIT may not perform directly, although no more than a specified percentage of a REIT’s assets (25% currently, but scheduled to decrease to 20% starting in 2018) may consist of interests in “taxable REIT subsidiaries”. If, for example, a taxable REIT subsidiary were to distribute a note secured by the taxable REIT subsidiary’s real estate assets (e.g., to limit the value of the REIT’s interests in the subsidiary), the distributed mortgage debt could be recharacterized as equity. Accordingly, interest on the debt would be treated as a dividend that is not deductible by the taxable REIT subsidiary. Additionally, such interest could lose its status as qualifying REIT income under the so-called “75% gross income test”, and the loan would be treated as an additional investment in the taxable REIT subsidiary. 3 Cases and other guidance discussing the factors considered in traditional debt / equity law include Notice 94-41, 1994-1 C.B. 357; John Kelley Co. v. Comm’r, 326 U.S. 521, 526 (1946); Estate of Mixon v. United States, 464 F.2d 394, 404 (5th Cir. 1972); Gilbert v. Comm’r, 248 F.2d 399, 402-03 (2d Cir. 1956). 4 More technically, the Proposed Regulations define an “expanded group” to mean an “affiliated group” (as defined in Section 1504(a) of the Code), determined (i) without regard to paragraphs 1 through 8 of Section 1504(b) of the Code (which exclude foreign corporations and certain special types of corporations, such as certain insurance companies, from an “affiliated group”); (ii) by giving regard to indirect ownership (which is determined under the rules of Section 304(c)(3) of the Code); and (iii) setting the ownership threshold at 80% of the voting power or value (rather than 80% of the voting power and value). See Prop. Treas. Reg. § 1.385-1(b)(3). -4- Related-Party Debt / Equity Regulations April 14, 2016 Accordingly, debt obligations issued by REITs (and taxable REIT subsidiaries), RICs, and certain insurance companies may be within the scope of the Proposed Regulations. Additionally, the Proposed Regulations would affect “controlled partnerships” (i.e., partnerships in which at least 80% of the interests in profits or capital are owned by members of an expanded group)5 and “disregarded entities” owned by members of an “expanded group”. C. DISTRIBUTED DEBT RULES The Distributed Debt Rules provide that a debt instrument would be treated as equity for U.S. federal income tax purposes if the instrument is either described by what the Proposed Regulations refer to as the “general rule” or is covered by what the Proposed Regulations term the “funding rule”. Under the “general rule”, a debt instrument would ordinarily be characterized as equity if the obligation is issued by a corporation (or treated as issued by a corporation) to a member of the issuer’s “expanded group” and the instrument is issued: (i) in a distribution with respect to stock; (ii) in exchange for “expanded group” stock (subject to a limited exception for “exempt exchanges”),6 or (iii) in exchange for property in an internal restructuring that is treated as an asset reorganization for U.S. federal income tax purposes (including an “A”, “C”, “D”, “F”, or “G” reorganization). Therefore, for example, if a U.S. subsidiary were to issue a note to its foreign parent as a distribution, that note would typically be treated as equity under the general rule, as would a debt obligation issued as consideration in a “brother / sister” stock sale or a debt instrument issued as consideration in an internal restructuring (that is treated as an acquisitive “D” reorganization under U.S. federal income tax law) that occurs within an “expanded group”. Under the “funding rule”, a debt instrument would generally be regarded as equity to the extent the instrument was issued by a corporation (the “funded member”—a term that includes both the member itself and any predecessor or successor) to a member of the funded member’s “expanded group” in exchange for property, with a principal purpose of funding a transaction similar to those covered by the “general rule”. Moreover, the Proposed Regulations include a “per se rule” that deems any debt instrument (other than certain debt instruments issued both in the ordinary course of business and in connection with the purchase of property or services in non-capital transactions) to be issued with the required “principal purpose” if the debt instrument is issued during the period beginning 36 months before the date of a relevant distribution or acquisition, and ending 36 months after the date of a relevant distribution or acquisition.7 Accordingly, if a U.S. subsidiary were to distribute cash to its foreign parent and the U.S. subsidiary had borrowed cash, within the preceding or succeeding 36 months, from another “expanded group” member, the debt instrument would be within the scope of 5 See Prop. Treas. Reg. § 1.385-1(b)(1). 6 An “exempt exchange” includes many asset reorganizations. However, such “exempt exchanges” may nevertheless be reorganizations that are subject to the “asset reorganization” rule (which does not contain a similar allowance for “exempt exchanges”). 7 See Prop. Treas. Reg. § 1.385-3(b)(3)(iv)(B). -5- Related-Party Debt / Equity Regulations April 14, 2016 the “funding rule”, even if the distribution and borrowing were undertaken for significant (and unrelated) non-tax business reasons. In addition to the above, the Distributed Debt Rules include an anti-abuse provision, which treats a debt obligation (or other instrument)8 as stock if the instrument “is issued with a principal purpose of avoiding the application of” the Distributed Debt Rules.9 The Distributed Debt Rules also provide that in applying the “general rule” and the “funding rule”, a “controlled partnership” is treated as an aggregate of its partners, so that each corporate partner that is a member of the expanded group that controls the partnership is treated as having issued a portion of the debt actually issued by the partnership. If indebtedness issued by a “controlled partnership” is recharacterized and deemed issued by a corporate partner, the holder of the recharacterized instrument is treated as holding stock in that partner.10 It is not entirely clear how these rules will interact with the rules in the Code that address partnerships more generally and—as noted below—the IRS and Treasury Department have requested comments on this subject. There are three principal exceptions to the Distributed Debt Rules. The first exception reduces the aggregate amount of distributions or acquisitions that are subject to the “general rule” or the “funding rule” by an amount equal to the current-year “earnings and profits” of the relevant member of an “expanded group”.11 To the extent that a member’s current-year “earnings and profits” are insufficient to cover all debt that could be recharacterized by the Distributed Debt Rules, this exception is applied to the first instruments issued in that year (an approach that differs from the rules that determine whether a distribution represents a dividend, which apportion “earnings and profits” ratably among all distributions made during the year). The second principal exclusion in the Distributed Debt Rules is a “threshold exception”, under which the Distributed Debt Rules do not apply unless and until the amount of indebtedness held by members of an “expanded group” that would be recharacterized if the Distributed Debt Rules were to apply exceeds $50 million (when measured by adjusted issue price). Third, the Distributed Debt Rules include an exception for “funded acquisitions of subsidiary stock by issuance” which—for example—allows one entity within an “expanded group” to lend cash to a “funded member” so the relevant cash can be used to subscribe for shares in a subsidiary.12 For the “funded acquisitions of subsidiary stock” exception to apply, the transferor must hold (either directly or 8 The preamble to the Proposed Regulations indicates that a nonperiodic swap payment could be such an instrument. 9 See Prop. Treas. Reg. § 1.385-3(b)(4). Although the Proposed Regulations include several examples of transactions that might be subject to the anti-abuse rule (e.g., a debt instrument that is issued to a non-member of an “expanded group” that later becomes part of the taxpayer’s “expanded group”), the precise scope of this rule is unclear. 10 Similarly, a debt instrument that is both recharacterized by the Distributed Debt Rules and issued by a “disregarded entity” is treated as stock in the owner of the “disregarded entity”. 11 See Prop. Treas. Reg. § 1.385-3(c)(1). Although the Proposed Regulations (and the preamble) do not make specific statements on this point, it is presumably irrelevant whether or not a CFC’s current-year “earnings and profits” are subpart F income. 12 The subsidiary, however, would be treated for these purposes as the “successor” of the funded member, so that the “funding rule” could apply if the subsidiary made a relevant distribution or acquisition. -6- Related-Party Debt / Equity Regulations April 14, 2016 indirectly) more than 50% of the voting power and value of the stock issuer, both immediately after the acquisition and for at least 36 months thereafter. The Distributed Debt Rules also do not apply in cases of attempted “affirmative use” (i.e., entering into a transaction with a principal purpose of reducing federal tax liability by way of a recharacterization that would occur under the Distributed Debt Rules). In general, if a debt instrument is recharacterized under the Distributed Debt Rules, that debt instrument is treated as equity from the instrument’s date of initial issuance.13 However, in cases where a debt instrument is treated as funding an acquisition or distribution that takes place in a taxable year (of the issuer) following the year when the obligation was initially issued, the relevant instrument is initially respected as indebtedness and is then deemed exchanged for equity on the date of the relevant acquisition or distribution in a transaction in which the holder’s amount realized is equal to the holder’s basis in the instrument (meaning that generally, no gain or loss—other than foreign exchange gain or loss—would be recognized).14 Similarly, on the date when the “threshold exception” ceases to apply, all instruments that were previously respected as indebtedness are deemed exchanged for equity. If a debt instrument that is recharacterized under the Distributed Debt Rules leaves an “expanded group” (either because the instrument is transferred to a nonmember or because the issuer and holder cease to be members of the same “expanded group”), the issuer of the obligation is deemed— immediately prior to this event—to issue a new debt instrument in exchange for the debt instrument that was previously treated as stock. By contrast to the rules discussed above that govern deemed conversions of debt instruments to equity, the Proposed Regulations do not contain a rule that limits the recognition of gain, loss, or income when an instrument subject to the Distributed Debt Rules is deemed to be re-exchanged for a debt instrument. D. DOCUMENTATION REQUIREMENTS The Documentation Requirements are relevant to what the Proposed Regulations refer to as an “expanded group instrument” (or an “EGI”). An EGI is generally defined15 as an instrument denominated as indebtedness that is issued16 by one member of an “expanded group” (or a “controlled partnership”) and held by another member of that “expanded group” (or a “controlled partnership”). An anti-avoidance rule also authorizes the IRS to treat an obligation denominated as indebtedness as an EGI if that obligation is not an EGI but was issued “with a principal purpose of avoiding the purposes” of the Documentation Requirements. Obligations that do not take the form of a debt instrument (such as, for example, sale / repurchase transactions) are not treated as EGIs (and 13 See Prop. Treas. Reg. § 1.385-3(d)(1)(i). 14 See Prop. Treas. Reg. § 1.385-1(c). 15 See Prop. Treas. Reg. § 1.385-2(a)(4)(ii). 16 The Proposed Regulations clarify that a person can be an “issuer” of an EGI even if that person is not the primary obligor. However, a guarantor that is not expected to be the primary obligor on an instrument is not considered an issuer. See Prop. Treas. Reg. § 1.385-2(a)(4)(iii). -7- Related-Party Debt / Equity Regulations April 14, 2016 therefore are generally not subject to the Documentation Requirements in their current form); however, the preamble to the Proposed Regulations solicits comments on which such instruments should be subject to similar rules (and on the documentation requirements that should apply to such instruments). As with the Distributed Debt Rules, EGIs that do not satisfy the Documentation Requirements are generally classified as equity for U.S. federal income tax purposes, without regard to how such obligations would be treated under general tax principles. By contrast to the “aggregate” approach taken by the Distributed Debt Rules (which treats recharacterized debt as issued by a flowthrough entity’s owner or owners), however, an EGI that is issued by a “controlled partnership” or a “disregarded entity” that does not satisfy the Documentation Requirements is treated as an equity interest in the issuing entity. The Documentation Requirements provide that issuers and holders of EGIs must establish and maintain records of the following: (i) an unconditional and legally binding obligation (on behalf of the issuer) to pay a sum certain on demand (or on one or more fixed dates); (ii) that the holder of the EGI has the rights of a creditor; (iii) that as of the date when the EGI was issued, the issuer had a reasonable expectation that the EGI would be repaid; and (iv) that, subsequent to the date when the EGI was issued, the issuer and holder behaved in a manner consistent with a debtor / creditor relationship. In general, the records required by the Documentation Requirements must be prepared within 30 days of the “relevant date”,17 although the Proposed Regulations allow issuers and holders 120 days to memorialize events supporting the presence of a debtor / creditor relationship. The Documentation Requirements primarily relate to how groups must analyze and substantiate their related-party obligations (rather than the legal relationship that is created by that obligation), but the Documentation Requirements also include prescriptive elements. For example, the Documentation Requirements indicate that the rights of a creditor “must” include a superior right to shareholders to share in the assets of the issuer in case of a dissolution, and state that a debt instrument “must” (except when special rules—applicable to revolving credit facilities, cash pooling structures and similar arrangements—modify this requirement)18 be memorialized on written documentation 17 For documentation that relates to the holder’s rights, the “relevant date” is generally the later of the date when the instrument is originally issued and the date when the instrument becomes an EGI. For records that relate to the issuer’s expected ability to repay an obligation, the date (if any) when the instrument is deemed reissued as a result of a “significant modification” is also a “relevant date”. An increase in the maximum principal amount of a revolving credit facility, cash pooling arrangement, or similar facility is also considered a “relevant date” for “ability to repay” purposes. The “relevant date” for records that relate to whether the issuer and holder conduct their affairs in a manner consistent with the presence of a debtor / creditor relationship is ordinarily the due date (in the case of payments) or the date of the event (in the case of an acceleration, event or default or similar occurrence). See Prop. Treas. Reg. § 1.385-2(b)(3)(ii). 18 In respect of revolving credit facilities and open accounts, the Proposed Regulations indicate that the initial principal balance of an EGI need not be evidenced by a separate note or other writing, but that revolving credit arrangements must otherwise satisfy the Documentation Requirements (for example, through evidence contained in board resolutions, credit agreements, omnibus agreements, security agreements and similar instruments). Likewise, cash pooling and similar arrangements are considered to meet the requirement that a “sum certain” be payable if the material documentation governing the ongoing operations of the arrangement, including any -8- Related-Party Debt / Equity Regulations April 14, 2016 establishing that the issuer has entered into an unconditional and legally binding obligation to pay a sum certain on demand or at one or more fixed dates. Additionally, the Documentation Requirements provide that creditors’ rights “typically include, but are not limited to” a right to accelerate an obligation or trigger an event of default if a payment of interest or principal is not made when due. Presumably, the Proposed Regulations’ use of the word “typically” in this context indicates that a debt instrument that does not provide for an acceleration right could—in some cases—satisfy the Documentation Requirements. However, the circumstances in which this might occur are unclear. Unless an exemption is relevant, the Documentation Requirements apply to all EGIs that exist within an “expanded group”, and the Documentation Requirements are therefore applicable without regard to the amount of related-party indebtedness that exists within that group, the size or term of a particular EGI, or a group’s business purposes for issuing and funding an EGI. In general, the Proposed Regulations require issuers and holders to maintain the records specified in the Documentation Requirements until the statute of limitations expires for all U.S. federal income tax returns with respect to which the treatment of the documented EGI is relevant. As noted above, the Documentation Requirements contain certain limited exceptions. First, the Documentation Requirements do not apply to “expanded groups” that both do not include a publicly traded member,19 and have both: (i) assets (as reported on all applicable financial statements)20 that are less than or equal to $100 million; and (ii) annual revenues that are less than or equal to $50 million. The Documentation Requirements also do not apply in cases of “affirmative use”—that is, if a taxpayer’s failure to satisfy their provisions was principally-motivated by a tax-avoidance purpose. Additionally, the Documentation Requirements contain a narrow exception for failures that are the result of “reasonable cause”. E. BIFURCATION AUTHORITY The Proposed Regulations grant the IRS authority to bifurcate instruments issued within a “modified expanded group” (which is defined in the same manner as an “expanded group”, except that the ownership threshold for a “modified expanded group” is 50%, rather than 80%) into separate debt and equity components. Under the Proposed Regulations, this authority exists ”.21 agreements with entities that are not members of the expanded group, is otherwise compliant with the Documentation Requirements. 19 Whether an entity is “publicly traded” for this purpose is defined by reference to Treas. Reg. § 1.1092(d)–1(b). 20 An “applicable financial statement” includes a financial statement prepared during the three years preceding when an instrument becomes an EGI that is either: (i) required to be provided to a government agency (other than for tax purposes) or (ii) an audited financial statement used for a substantial non-tax purpose (including credit purposes and shareholder reporting). 21 See Prop. Treas. Reg. § 1.385-1(d). -9- Related-Party Debt / Equity Regulations April 14, 2016 F. APPLICATION TO CONSOLIDATED GROUPS As noted above, the Proposed Regulations generally do not apply within consolidated groups, and rather treat all members of a consolidated group as a single corporation.22 The Proposed Regulations include transition rules that govern cases where an entity or debt instrument subject to the Distributed Debt Rules enters or leaves a consolidated group. G. EFFECTIVE DATES The Documentation Requirements and the Bifurcation Authority are generally proposed to apply to instruments issued (or deemed issued) on or after the date when the Proposed Regulations are published as final regulations in the Federal Register (the “finalization date”), and to instruments issued (or deemed issued) before that date, as a result of a retroactive entity classification election that is filed on or after the finalization date. According to the Proposed Regulations, the Distributed Debt Rules will generally apply to instruments issued (or deemed issued) on or after April 4, 2016, and to instruments treated as issued on an earlier date, as a result of an entity classification election filed on or after April 4, 2016. However, in evaluating whether the “funding rule” applies, the Distributed Debt Rules disregard any distribution, acquisition or funding transaction that took place prior to April 4, 2016. To the extent that an instrument is issued on or after April 4, 2016 (but before the finalization date) and would be characterized as equity under the Distributed Debt Rules, general tax principles would determine the treatment of that instrument during the interim period. However, 90 days after the finalization date, all such instruments would be deemed exchanged for equity, in a transaction in which the holder’s amount realized was equal to the holder’s basis in the instrument (meaning that generally, no gain or loss—other than foreign exchange gain or loss—would be recognized). It is noteworthy that the Proposed Regulations do not contain an exception for transactions that are completed on or after the relevant effective date, but were completed pursuant to a binding commitment that existed prior to the relevant effective date. H. ANALYSIS AND IMPACT 1. Distributed Debt Rules The Distributed Debt Rules have the potential to fundamentally change the way in which related groups are required to conceptualize related-party funding transactions. In many cases, the Proposed Regulations lead to results that may seem counterintuitive. For example, it has long been generally understood that if a corporation distributes a debt obligation, that transaction is a distribution for U.S. tax purposes (and therefore represents a dividend to the extent of the distributing corporation’s current-year and accumulated “earnings and profits”), and that likewise, a sale of stock between two “brother-sister” corporations (with debt as the consideration) can result in a deemed dividend because of the application of the “related corporation” rules of the Internal Revenue Code.23 It appears, 22 See Prop. Treas. Reg. § 1.385-1(e). 23 See Section 304 of the Code. -10- Related-Party Debt / Equity Regulations April 14, 2016 however, that under the Proposed Regulations, such transactions may not result in immediate dividends, even in cases where available “earnings and profits” are present. Rather, a corporation’s distribution of a note that is subject to the Distributed Debt Rules may, depending on the underlying facts,24 be a non-taxable distribution of stock. Similarly, a sale of stock between two “brother / sister” corporations (with debt, that is recharacterized under the Distributed Debt Rules, given as the consideration) would appear to generally be a taxable stock sale (and not subject to potential recharacterization as a distribution).25 By contrast, the redemption, maturity or sale of a recharacterized debt instrument appears to result in a redemption of that instrument (either directly, in the case of redemption or maturity, or as a result of the deemed exchange that occurs under the Distributed Debt Rules when a recharacterized obligation leaves the relevant “expanded group”).26 Although in some cases, it might be possible to argue that such a redemption is “not essentially equivalent to a dividend” or otherwise eligible for sale-orexchange treatment, it seems prudent to assume (particularly in cases where the issuer is a wholly owned subsidiary) that the disposition or maturity of a recharacterized instrument will result in a distribution that would be taxable as a dividend (and potentially subject to outbound U.S. withholding tax if received by a foreign holder) to the extent of the issuer’s current and accumulated “earnings and profits”.27 The potential impact of the Distributed Debt Rules is wide-reaching. The preamble to the Proposed Regulations indicates that the Distributed Debt Rules were motivated, at least in part, by concerns that the transactions identified in the Distributed Debt Rules may allow foreign-parented groups (including, but not limited to, parents of inverted groups) to increase the level of related-party debt carried by their U.S. subsidiaries without making new capital investments in the United States. The preamble also states that the IRS and Treasury Department have concerns that U.S.-parented groups could use similar transactions to repatriate untaxed foreign profits to the United States. Although the Distributed Debt Rules undoubtedly affect many transactions that might be viewed as tax-motivated,28 the Distributed Debt Rules also have the potential to impact transactions that are driven by significant non-tax considerations. For example, the Distributed Debt Rules may affect the various planning efforts being undertaken by foreign banking organizations, including to restructure their U.S. subsidiaries in order to comply with the Federal Reserve’s U.S. intermediary holding company (“IHC”) 24 In particular, the main relevant facts would be whether the distribution is otherwise taxable under the “stock dividend” rules of Section 305(b) or Section 305(c) of the Code. 25 In some situations, such sales might—after giving effect to the Distributed Debt Rules—fall within the “Section 351” rules that can allow contributions to be made to a corporation on a tax-free basis. However, obligations recharacterized under the Distributed Debt Rules are likely to have terms that cause such instruments to be treated as “nonqualified preferred stock” that generally cannot be received tax-free under Section 351. 26 Such redemptions are generally governed by Section 302 of the Code. 27 See, e.g., United States v. Davis, 397 U.S. 301 (1970). 28 Examples of such transactions could include inversions, along with “domestication” transactions in which a foreign parent reincorporates a leveraged foreign holding company (which owns the parent’s U.S. group) to the United States. -11- Related-Party Debt / Equity Regulations April 14, 2016 requirements, the Federal Reserve’s recent proposals for total loss absorbing capacity at IHCs, and other bank regulatory requirements. Similarly, the Distributed Debt Rules may apply to routine integration planning transactions (such as debt distributions and asset reorganizations in which at least part of the consideration is not stock) in which a U.S. target entity assumes a pro rata portion of the acquisition debt incurred by a foreign acquiror. Because, as noted above, the Proposed Regulations apply to entirely domestic groups that do not file consolidated tax returns, the Distributed Debt Rules likewise have the potential to affect transactions between a REIT and its taxable REIT subsidiary and distributions from taxable REIT subsidiaries to operating partnerships. The extent of the Distributed Debt Rules means that “expanded groups” will need to carefully monitor internal restructurings and their treasury functions to avoid inadvertent transactions that lead to undesirable U.S. federal income tax consequences. In some cases, it may remain possible to achieve the commercial results that are desired from transactions that would otherwise be within the scope of the Distributed Debt Rules by, for example, having subsidiaries obtain external funding directly, rather than routing leverage that would be covered by the “funding rule” through a related party. Such planning may, however, have further commercial implications. 2. Documentation Requirements The ultimate impact of the Documentation Requirements is likely to be context-dependent. For example, in the context of large related-party financing transactions that occur outside the ordinary course of business, many corporations consider the preparation of records that are broadly consistent with the Documentation Requirements to be good practice. Additionally, the Documentation Requirements include a degree of flexibility, in that the Documentation Requirements generally prescribe that an issuer’s documentation must be sufficient to establish relevant facts (e.g., that there was a reasonable expectation that an obligation would be repaid) and do not require that specific analyses (such as a third-party credit study) be performed in all cases. Accordingly, although groups that undertake major internal financings will need to review their recordkeeping, analytical and retention policies to ensure that the relevant procedures are consistent with the Documentation Requirements, the ultimate impact of the Documentation Requirements on high-value transactions may be limited. Nevertheless, it is not atypical for certain groups to use streamlined analytical and recordkeeping procedures for routine, lower-value internal financing transactions. For example, a group may have a general policy of funding all subsidiaries (or all subsidiaries within a particular business unit) with a pre-determined mixture of debt and equity, and therefore may not closely examine each subsidiary’s business plan, cash flow projections or other financial metrics in connection with each related-party advance. It is possible that such simplified procedures would not be sufficiently robust to satisfy the Documentation Requirements. Additionally, as discussed above, certain aspects of the Documentation Requirements (such as the requirement that a debt instrument include a superior right to shareholders to share in the assets of the issuer in case of dissolution) are substantive. Although “standard” debt instruments issued within -12- Related-Party Debt / Equity Regulations April 14, 2016 a related group will often satisfy these prerequisites, the prescriptive aspects of the Documentation Requirements may have additional significance in the context of special types of debt instruments, such as obligations intended to satisfy “total loss absorbing capacity” requirements. Finally, the Documentation Requirements are proposed to take effect immediately on the date when the Documentation Requirements are published as final regulations, and generally—once effective— must be satisfied within 30 days of the date when an instrument is issued. In view of the negative consequences associated with issuing related-party debt that does not comply with the Documentation Requirements, it may be prudent for groups to review (and, if necessary, update) their procedures for documenting and substantiating related-party obligations before the Proposed Regulations are finalized. 3. Bifurcation Authority Although the concept of the Bifurcation Authority is in some senses straightforward, ambiguities also exist in this aspect of the Proposed Regulations. For example, the Proposed Regulations do not include a well-defined standard that delineates when an instrument may be bifurcated. Rather, the Bifurcation Authority applies ”.29 Although the Proposed Regulations give one example of a situation where bifurcation may be appropriate—if the IRS’s analysis indicates a “reasonable expectation that, as of the issuance of the [instrument], only a portion of the principal amount of” that instrument will be repaid—it is not clear how such an analysis would be conducted. Similarly, although the standard articulated in the Proposed Regulations refers to “general federal tax principles”, such bifurcation into debt and equity components has generally not been the approach taken by courts and the IRS,30 meaning that few inferences can be drawn from established judicial or administrative precedents. I. COMMENTS ON THE PROPOSED REGULATIONS The IRS and Treasury Department have requested comments on “all aspects” of the Proposed Regulations. Additionally, the preamble identifies the following as areas that are of particular interest to the IRS and Treasury Department: (i) which additional instruments should be subject to the Proposed Regulations; (ii) whether the Proposed Regulations should include special rules for cash pools and similar arrangements; (iii) the rule addressing deemed exchanges of an EGI and a debt instrument; (iv) the application of the Proposed Regulations to entities that are initially not U.S. persons or other entities required to file (or be reported on) a U.S. tax return, but which subsequently enter the U.S. tax system; (v) whether the Proposed Regulations should be broadened to cover indebtedness issued by “blocker” entities; (vi) whether guidance should be issued under the “foreign 29 See Prop. Treas. Reg. § 1.385-1(d). 30 There are, however, limited exceptions to this general rule. See, e.g., Farley Realty v. Comm’r, 279 F.2d 701 (2d Cir. 1960). -13- Related-Party Debt / Equity Regulations April 14, 2016 tax credit splitter” regulations to address “U.S. equity hybrid instruments” that arise under the Distributed Debt Rules; and (vii) the interaction of the Proposed Regulations (when applied to indebtedness issued by a “controlled partnership”) with the rules of the U.S. Internal Revenue Code that govern partnerships more generally. The deadline for submitting comments on the Proposed Regulations is July 7, 2016. By contrast to many proposed regulations issued by the IRS and Treasury Department, it should be noted that the preamble to the Proposed Regulations does not indicate that the IRS and Treasury Department view the Proposed Regulations as exempt from the “notice and comment” procedures of the Administrative Procedure Act. * * * Copyright © Sullivan & Cromwell LLP 2016 -14- Related-Party Debt / Equity Regulations April 14, 2016 LONDON:532731 Ronald E. Creamer Jr. +1-212-558-4665 creamerr@sullcroma@sullcrom.com David C. Spitzer +1-212-558-4376 spitzerd@sullcrom.com Davis J. Wang +1-212-558-3113 wang Michael Orchowski +44-20-7959-8504 orchowskim@sullcrom.com | http://www.lexology.com/library/detail.aspx?g=232244f7-c93d-4a13-8582-413dd2ae61a4 | CC-MAIN-2017-13 | en | refinedweb |
I'm reading a big file with hundreds of thousands of number pairs representing the edges of a graph. I want to build 2 lists as I go: one with the forward edges and one with the reversed.
Currently I'm doing an explicit
for
with open('SCC.txt') as data:
for line in data:
line = line.rstrip()
if line:
edge_list.append((int(line.rstrip().split()[0]), int(line.rstrip().split()[1])))
reversed_edge_list.append((int(line.rstrip().split()[1]), int(line.rstrip().split()[0])))
I would keep your logic as it is the Pythonic approach just not split/rstrip the same line multiple times:
with open('SCC.txt') as data: for line in data: spl = line.split() if spl: i, j = map(int, spl) edge_list.append((i, j)) reversed_edge_list.append((j, i))
Calling rstrip when you have already called it is redundant in itself even more so when you are splitting as that would already remove the whitespace so splitting just once means you save doing a lot of unnecessary work.
You can also use csv.reader to read the data and filter empty rows once you have a single whitespace delimiting:
from csv import reader with open('SCC.txt') as data: edge_list, reversed_edge_list = [], [] for i, j in filter(None, reader(data, delimiter=" ")): i, j = int(i), int(j) edge_list.append((i, j)) reversed_edge_list.append((j, i))
Or if there are multiple whitespaces delimiting you can use
map(str.split, data):
for i, j in filter(None, map(str.split, data)): i, j = int(i), int(j)
Whatever you choose will be faster than going over the data twice or splitting the sames lines multiple times. | https://codedump.io/share/Go3IwTA0MdPG/1/build-2-lists-in-one-go-while-reading-from-file-pythonically | CC-MAIN-2017-13 | en | refinedweb |
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
The template
deref_iterator
is a type used as an attribute customization point. It is invoked by
the Karma repetitive generators (such as List
(
%), Kleene
(unary
*), Plus (unary
+), and Repeat)
in order to dereference an iterator pointing to an element of a container
holding the attributes to generate output from.
#include <boost/spirit/home/support/container.hpp>
Also, see Include Structure.
template <typename Iterator, typename Enable> struct deref_iterator { typedef <unspecified> type; static type call(Iterator& it); };
Notation
Iterator
An iterator type.
it
An instance of an iterator of type
Iterator.
C
A container type whose iterator type is
Iterator.
Spirit predefines specializations
of this customization point for several types. The following table lists
those types together with the types returned by the embedded typedef
type:
The customization point
deref_iterator
needs to be implemented for a specific iterator type whenever the container
this iterator belongs to is to be used as an attribute in place of a
STL container. It is applicable for generators (Spirit.Karma)
only. As a rule of thumb: it has to be implemented whenever a certain
iterator type belongs to a container which is to be passed as an attribute
to a generator normally exposing a STL container,
C
and if the container type does not expose the interface of a STL container
(i.e.
is_container<C>::type would normally return
mpl::false_).
If this customization point is implemented, the following other customization points might need to be implemented as well.
Here are the header files needed to make the example code below compile:
#include <boost/spirit/include/karma.hpp> #include <iostream> #include <vector>
The example (for the full source code please see here: customize_counter.cpp) uses the data structure
namespace client { struct counter { // expose the current value of the counter as our iterator typedef int iterator; // expose 'int' as the type of each generated element typedef int type; counter(int max_count) : counter_(0), max_count_(max_count) {} int counter_; int max_count_; }; }
as a direct attribute to the List
(
%) generator. This
type does not expose any of the interfaces of an STL container. It does
not even expose the usual semantics of a container. The presented customization
points build a counter instance which is incremented each time it is
accessed. The examples shows how to enable its use as an attribute to
Karma's repetitive generators.
In order to make this data structure compatible we need to specialize
a couple of attribute customization points:
traits::is_container,
traits::container_iterator,
traits::begin_container, and
traits::end_container. In addition,
we specialize one of the iterator related customization points as well:
traits::deref_iterator.
// All specializations of attribute customization points have to be placed into // the namespace boost::spirit::traits. // // Note that all templates below are specialized using the 'const' type. // This is necessary as all attributes in Karma are 'const'. namespace boost { namespace spirit { namespace traits { // The specialization of the template 'is_container<>' will tell the // library to treat the type 'client::counter' as a container providing // the items to generate output from. template <> struct is_container<client::counter const> : mpl::true_ {}; // The specialization of the template 'container_iterator<>' will be // invoked by the library to evaluate the iterator type to be used // for iterating the data elements in the container. template <> struct container_iterator<client::counter const> { typedef client::counter::iterator type; }; // The specialization of the templates 'begin_container<>' and // 'end_container<>' below will be used by the library to get the iterators // pointing to the begin and the end of the data to generate output from. // These specializations respectively return the initial and maximum // counter values. // // The passed argument refers to the attribute instance passed to the list // generator. template <> struct begin_container<client::counter const> { static client::counter::iterator call(client::counter const& c) { return c.counter_; } }; template <> struct end_container<client::counter const> { static client::counter::iterator call(client::counter const& c) { return c.max_count_; } }; }}}
// All specializations of attribute customization points have to be placed into // the namespace boost::spirit::traits. namespace boost { namespace spirit { namespace traits { // The specialization of the template 'deref_iterator<>' will be used to // dereference the iterator associated with our counter data structure. // Since we expose the current value as the iterator we just return the // current iterator as the return value. template <> struct deref_iterator<client::counter::iterator> { typedef client::counter::type type; static type call(client::counter::iterator const& it) { return it; } }; }}}
The last code snippet shows an example using an instance of the data
structure
client::counter to generate output from a
List (
%) generator:
// use the instance of a 'client::counter' instead of a STL vector client::counter count(4); std::cout << karma::format(karma::int_ % ", ", count) << std::endl; // prints: '0, 1, 2, 3'
As you can see, the specializations for the customization points as defined above enable the seamless integration of the custom data structure without having to modify the output format or the generator itself.
For other examples of how to use the customization point
deref_iterator please see here: use_as_container. | http://www.boost.org/doc/libs/1_48_0/libs/spirit/doc/html/spirit/advanced/customize/iterate/deref_iterator.html | CC-MAIN-2014-23 | en | refinedweb |
An extra period appears in an SMTP domain in a recipient policy
Topic Last Modified: 2009-02-11
The Microsoft Exchange Server Best Practices Analyzer examines the Simple Mail Transfer Protocol (SMTP) domains that appear in Exchange Server 2003 recipient policies.
If the Best Practices Analyzer detects a period (.) at the end of an SMTP domain name or two consecutive periods in an SMTP domain name, the tool generates one of the following warning messages, as appropriate:
A period appears after an SMTP domain name
Two consecutive periods appear in an SMTP domain name
If you have many SMTP domains listed in an Exchange 2003 e-mail address recipient policy, it may be difficult to locate a typographical error in the list of domains. However, if an SMTP domain name contains unintentional extra periods, Exchange cannot receive e-mail for the particular SMTP domain.
In Exchange 2003, e-mail address recipient policies perform two basic functions. They specify the messaging domains for which Exchange will accept mail, and they specify the e-mail addresses that are stamped on the particular user objects to which the policy applies.
Exchange 2007 separates the Exchange 2003 e-mail address recipient policy functionality into the following components:
- Accepted Domains
Exchange 2007 e-mail address policies define the e-mail proxy addresses that are stamped on recipient objects. Accepted domains define the SMTP namespaces for which an Exchange organization routes e-mail. When you configure an accepted domain policy, you can link it to an e-mail address policy so that Exchange will generate recipient e-mail addresses for the particular SMTP domain. Every e-mail address policy must be linked to an existing accepted domain. This is to make sure that Exchange Transport servers can correctly route e-mail messages that are sent to the e-mail addresses that are defined by the e-mail address policy.
To address this issue, correct any typographical errors that the Best Practices Analyzer finds.To modify an Exchange 2003 recipient policy
Start the Exchange System Manager tool.
Expand Recipients, and then click Recipient Policies.
In the details pane, right-click the recipient policy, and then click Properties.
Click the E-Mail Addresses (Policy) tab.
In the Generation rules list, click the e-mail address that you want to modify, and then click Edit.
In the Address box, remove any extra periods, as appropriate, and then click OK two times.
Right-click the policy, and then click Apply this policy now.
For more information about recipient policies, see Understanding Recipient Policies. | http://technet.microsoft.com/en-us/library/dd535372(v=exchg.80).aspx | CC-MAIN-2014-23 | en | refinedweb |
Code. Collaborate. Organize.
No Limits. Try it Today.
Many a developer has found the dreaded GUI freeze problem simply by calling a method which takes a long time to execute. The reason for this is that the "long outstanding job" will execute on the same Thread as the GUI program. This stops all events from being posted and handled, thus locking up the User Interface and rendering the application unusable until the background job is complete! Not a very professional way to deliver a product indeed.
This site has many articles related to this topic, many of them are very good. Some are complicated, other require knowledge of the internals regarding the solution. This solution offers something a bit different. It takes an older technology, BackGroundWorker(s), and adds in the ICommand interface, a Delegate here, and an Event there, and we have a fully asynchronous and easy solution to this common issue. Perhaps the best part of this is that you can take this project and easily change it to suit your needs without knowing too many details.
BackGroundWorker
ICommand
Simply download the code and run it in Debug mode to see it work. This is the main screen:
The top row of buttons are the TabItems in the Main TabControl. You'll default to the first tab which has two buttons: "Start the Asynchronous Job" and "Stop the Asynchronous Job". Once you start the job, you will see a progress bar at the bottom begin to show the status of the work. Press the "Stop" button and the background work will stop.
TabItem
TabControl
To see the impact of the BackGroundWorker on the main GUI thread, just press any of the other tabs while the progress bar is working. You can browser the web, or you can view the code diagrams of how this was designed.
So what is the BaseCommand all about? It is a BackgroundWorker that implements the ICommand interface. Agile programming techniques teach us to abstract away the commonality. The BaseCommand class is an attempt to do that.
BaseCommand
BackgroundWorker
public class BaseCommand : BackgroundWorker, ICommand
{
public bool canexecute = true;
public event EventHandler CanExecuteChanged;
//------------------------------------------------------------------
public BaseCommand()
{
this.WorkerSupportsCancellation = true;
this.WorkerReportsProgress = true;
this.DoWork += new DoWorkEventHandler(BWDoWork);
this.ProgressChanged +=
new ProgressChangedEventHandler(BWProgressChanged);
this.RunWorkerCompleted +=
new RunWorkerCompletedEventHandler(BWRunWorkerCompleted);
}
//------------------------------------------------------------------
public virtual void BWRunWorkerCompleted(object sender,
RunWorkerCompletedEventArgs e)
{
}
//------------------------------------------------------------------
public virtual void BWProgressChanged(object sender,
ProgressChangedEventArgs e)
{
}
//------------------------------------------------------------------
public virtual void BWDoWork(object sender, DoWorkEventArgs e)
{
}
//------------------------------------------------------------------
public virtual bool CanExecute(object parameter)
{
return true;
}
//------------------------------------------------------------------
public virtual void Execute(object parameter)
{
}
}
As a BackgroundWorker, we find that some fundamental settings are set in the base class. This worker does support cancellation, it reports progress, and the events for doing the work. It pre-wires the ProgressChanged event handler as well as the RunWorkerCompleted event handler. Note however, that the methods themselves are virtual. This allows the concrete implementation to override these methods and implement what is desired.
ProgressChanged
RunWorkerCompleted
virtual
The ICommand interface is the virtual CanExecute method that defaults to true and the Execute method (also virtual), both meant to be overridden in the concrete class. One other small note, there is a var named (lowercase) canexecute. We'll discuss this a bit later.
CanExecute
true
Execute
canexecute
Inheriting the BaseCommand class as shown below, the concrete class first defines two delegates. One is for ProgressChanged and takes a parameter of type int that represents the progress (in percentage). The other delegate is the DataReady signature which, in this case, takes an ObservableCollection of type string. This is the preliminary set up for allowing any registered listeners to receive "feedback". Finally, there are two events built on those delegates which will be used for the "loosely coupled" communication in the View Model.
int
DataReady
ObservableCollection
string
public class AsynchronusCommand : BaseCommand
{
<summary>
/// This is the delegate definition to post progress back to caller via the
/// event named EHProgressChanged
///<summary>
///<param name="progress">
// Hold the progress integer value (from 0-100)</param>
//-----------------------------------------------------------------------
public delegate void DlgProgressChanged(int progress);
//-----------------------------------------------------------------------
/// <summary>
/// This is the delegate definition to post
/// a ObservableCollection back to the caller via the
/// event EHDataReady
/// </summary>
/// <param name="data">The signature
/// needed for the callback method</param>
public delegate void DlgDataReady(ObservableCollection<string> data);
//-----------------------------------------------------------------------
//Static event allows for wiring up to event before class is instanciated
/// <summary>
/// This is the Event others can subscribe to,
/// to get the post back of Progress Changed
/// </summary>
public static event DlgProgressChanged EHProgressChanged;
//-----------------------------------------------------------------------
//Static event to wire up to prior to class instanciation
/// <summary>
/// This is the event of which others can
/// </summary>
public static event DlgDataReady EHDataReady;
//-----------------------------------------------------------------------
/// <summary>
/// The Entry point for a WPF Command implementation
/// </summary>
/// <param name="parameter">Any parameter
/// passed in by the Commanding Architecture</param>
public override void Execute(object parameter)
{
if (parameter.ToString() == "CancelJob")
{
// This is a flag that the "other thread" sees and supports
this.CancelAsync();
return;
}
canexecute = false;
this.RunWorkerAsync(GetBackGroundWorkerHelper());
}
So now we arrive to the ICommand part of this class. This is the part of the class that will receive notification from a bound Command in WPF via the Execute method (shown above). WPF calls the Execute method via the XAML command binding in this demo. The first check in the method is to see if the parameter passed in was to Cancel the Job; which, is just an arbitrary string of "CancelJob". If so, this Thread is Canceled via the code shown, and control is returned to the application. If this is not a Cancellation, then the code invokes the asynchronous RunWorkerAsync method (the built-in support from the BackgroundWorker class). But wait a second, what's this GetBackGroundWorkerHelper() call all about?
RunWorkerAsync
GetBackGroundWorkerHelper()
I found out long ago that communicating to other threads was vastly simplified by creating a BackGroundWorkerHelper class. It is nothing more than a container for anything you want to pass in and out of another thread. In this sample below, we show two objects and a SleepIteration value being set. This is in the instantiation of the BackGroundWorkerHelper. The BackGroundWorkerHelper does have other variables in it we will see later.
BackGroundWorkerHelper
SleepIteration
//-----------------------------------------------------------------------
/// <summary>
/// A helper class that allow one to encapsulate everything
/// needed to pass into and out of the
/// Worker thread
/// </summary>
/// <returns>BackGroundWorkerHelper</returns>
public BGWH GetBackGroundWorkerHelper()
{
//The BGWH class can be anything one wants it to be..
//all of the work performed in background thread can be stored here
//additionally any cross thread communication can
//be passed into that background thread too.
BGWH bgwh = new BGWH(){obj1 = 1,
obj2 = 2,
SleepIteration = 200};
return bgwh;
}
So what does the BackGroundWorkerHelper class look like? For this demo we arbitrarily set it up as shown below. The objects don't do anything other than to show you that anything may be passed to the other thread. Notice that the var Data is the ObservableCollection that will be populated in the background thread.
Data
//////////////////////////////////////////////////////////////////
/// <summary>
/// Background worker Class allows you to pass in as many objects as desired,
/// Just change this class to suit your needs.
/// </summary>
public class BGWH
{
/// <summary>
/// This demo chose a Object for the first "thing" to be passed in an out.
/// </summary>
public object obj1;
/// <summary>
/// This is the second thing and shows another "object"
/// </summary>
public object obj2;
/// <summary>
/// An arbitrary integer value named SleepIteration
/// </summary>
public int SleepIteration;
/// <summary>
/// An observable collection
/// </summary>
public ObservableCollection<string> Data =
new ObservableCollection<string>();
}
Knowing that this BackGroundWorkerHelper is nothing more than a convenience class, it then becomes very simple to pass in complex data structures, DataTables, Lists, and the whole lot... Remember though that you don't want to pass in references to GUI objects because you will never be able to update them from these threads without some special code. Besides, this violates the rules of MVVM in that the ViewModel handles the content.
DataTable
List
So where is the "other thread"? The BackGroundWorker will spin off an asynchronous thread via the RunWorkerAsync method call discussed earlier. It will call the BWDoWork method which we hooked up via an EventHandler registration in the BaseCommand class. But notice here that we over-rode that method in the base class in the Concrete class, as shown below.
BWDoWork
Concrete
//-----------------------------------------------------------------------
/// <summary>
/// This is the implementation of the Asynchronous logic
/// </summary>
/// <param name="sender">Caller of this method</param>
/// <param name="e">The DoWorkEvent Arguments</param>
public override void BWDoWork(object sender,
System.ComponentModel.DoWorkEventArgs e)
{
//Ahh! Now we are running on a separate asynchronous thread
BGWH bgwh = e.Argument as BGWH;
//always good to put in validation
if (bgwh != null)
{
//we are able to simulate a long outstanding work item here
Simulate.Work(ref bgwh, this);
}
//All the work is done make sure to store result
e.Result = bgwh;
}
Notice also how easy it is to "unpack" the cargo. DoWorkEventArgs contains an argument which we know is a BackGroundWorkerHelper. How do we know? Because we passed it in via the GetBackGroundWorkerHelper() method call we discussed earlier. We unpack this helper class, and then check for null, and pass it in as a reference to the Simulate.Work method. All the Simulate.Work call does is to enter a loop, wait a bit, add data to the ObservableCollection in the BackGroundWorkerHelper class, and get this... It posts a Progress Event notification all the way back to the View via the ViewModel... Let's take a look.
DoWorkEventArgs
null
Simulate.Work
There's a loop that uses the BackGroundWorkerHelper's SleepIteration count to control how many times to loop, then there's a call to report progress via the TheAsynchCommand.ReportProgress call, as shown below.
TheAsynchCommand.ReportProgress
//-----------------------------------------------------------------------
public static void Work( ref Model.BGWH bgwh, BaseCommand TheAsynchCommand)
{
//shows how the BGWH can have many different control mechanisms
//note that we pass it in as a reference which means all updates here are
//automatically reflected to any object that has this reference.
int iteration = bgwh.SleepIteration;
//This is iterative value to determine total progress.
double perIteration = .005;
//simulate reading 200 records with a small delay in each..
Random r = new Random();
for (int i = 0; i < iteration + 1; i++)
{
System.Threading.Thread.Sleep(r.Next(250));
//Update the data element in the BackGroundWorkerHelper
bgwh.Data.Add("This would have been the " +
"data from the SQL Query etc. " + i);
//currentstate is used to report progress
var currentState = new ObservableCollection<string>();
currentState.Add("The Server is busy... Iteration " + i);
double fraction = (perIteration) * i;
double percent = fraction * 100;
//here a call is made to report the progress to the other thread
TheAsynchCommand.ReportProgress((int)percent, currentState);
//did the user want to cancel this job? If so get out.
if (TheAsynchCommand.CancellationPending == true)
{
// get out of dodge
break;
}
}
}
Note that at the bottom of the loop, there is a check for Cancelling the work. If true, we simply break out of this loop and exit the class. Because the BackGroundWorkerHelper helper was passed as a reference, the data content is already set! It is ready to be posted back to the ViewModel.
Remember those event handlers we wired up and over-rode? When the Simulate.Work class wanted to report progress, it simply fired the event to report progress. The event handler in the AsynchronousCommand class picked it up (running on the GUI thread) and then fired an event to tell the View Model. This is shown below via the EHProgressChanged(progress) signal.
AsynchronousCommand
EHProgressChanged(progress)
//-----------------------------------------------------------------------
/// BackGround Work Progress Changed, runs on this object's main thread
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
public override void BWProgressChanged(object sender,
System.ComponentModel.ProgressChangedEventArgs e)
{
//allow for a Synchronous update to the WPF Gui Layer
int progress = e.ProgressPercentage;
//notify others that the progress has increased.
EHProgressChanged(progress);
EHDataReady((ObservableCollection<string>)e.UserState);
}
The EHDataReady is also signaled in an attempt to update the data before it was complete. Tests are inconclusive on whether or not data was able to be displayed on a ProgressNotify operation as shown. More work needs to be done there. However, as you will see in this application, the progress bar runs perfect.
EHDataReady
ProgressNotify
One may ask, "How do we control the user from repeatedly kicking off the same command?" Remember that ubiquitous variable canexecute in the BaseCommand class? The method below which is called when the BackGroundThread is complete takes care of states.
BackGroundThread
/// <summary>
/// Handles the completion of the background worker thread.
/// This method is running on the current thread and
/// can be used to update the execute method with information as needed
/// </summary>
/// <param name="sender">The sender of this event</param>
/// <param name="e">The Run WorkerCompleted Event Args</param>
public override void BWRunWorkerCompleted(object sender,
System.ComponentModel.RunWorkerCompletedEventArgs e)
{
//ideally this method would fire an event
//to the view model to update the data
BGWH bgwh = e.Result as BGWH;
var data = bgwh.Data;
//notify others that the data has changed.
EHDataReady(data);
EHProgressChanged(0);
canexecute = true;
}
The Asynchronous Commands' BWRunWorkerCompeted event handler, which was overridden from the BaseCommand class (and was pre-wired to receive the event), receives the notification. It parses the BackGroundWorkerHelper, and then fires two events, the first is the Data event (handled in the ViewModel), the other is to reset the ProgressBar to zero, and finally, canexecute is set to true, which will allow this command to be invoked again (canexecute stops multiple triggering of the same command). Note; however, this can also be accomplished in the Execute method by checking to see if the thread is busy; if it is, the user should be notified via a message. The solution, as is, will not allow the user to press the button more than once, which is a nice approach.
BWRunWorkerCompeted
ProgressBar
Just how did the ViewModel handle these events? Looking at the first two lines of code in the ViewModel, we see that two EventHandlers were wired up to the static Model's AsynchronousCommand delegates. Remember way back at the top of the article were we defined these signatures?
//-----------------------------------------------------------------------
public MainWindowViewModel()
{
//the model will send a notification to us when the data is ready
Model.AsynchronusCommand.EHDataReady +=
new Model.AsynchronusCommand.DlgDataReady(
AsynchronusCommandEhDataReady);
//this is the event handler to update
//the current state of the background thread
Model.AsynchronusCommand.EHProgressChanged +=
new Model.AsynchronusCommand.DlgProgressChanged(
AsynchronusCommandEhProgressChanged);
And here are the methods in the ViewModel... Pretty simple really, as all they take is an integer for the progress in the first method, and an ObservableCollection of strings in the second.
///-----------------------------------------------------------------------
// The event handler for when there's new progress to report
void AsynchronusCommandEhProgressChanged(int progress)
{
Progress = progress;
}
//-----------------------------------------------------------------------
// The event handler for when data is ready to show to the end user
void AsynchronusCommandEhDataReady(ObservableCollection<string> data)
{
Data = data;
}
From there, they merely invoke the property setter as shown. WPF takes care of the rest, and we see something like this happen when all is said and done.
This article demonstrates an easy way to spin off another thread to do work, while the GUI thread remains unaffected. It shows how a BackGroundWorker and an ICommand interface can be wired up from XAML, using the MVVM pattern. The AsynchronusCommand (if you didn't already notice) is in the folder named Model, and not by accident. What we have here is a new way a Model can be implemented. It is "Loosely Coupled", and it runs in another thread. It even, without having a reference to the ViewModel, updates the View via the fired Events in the Simulate.Work class, which were handled by the ViewModel. Agile brought us to this point.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
IsBusy = true;
System.Threading.Tasks.Task.Factory.StartNew(() =>
{
return resultsFromTheSlowThingInSync();
}).ContinueWith(t =>
{
if (t.IsFaulted)
System.Diagnostics.Debug.WriteLine(t.Exception.ToString());
IsBusy = false;
IsShowingResults = true;
Results = t.Result.Select(br=> br.Report).Aggregate((a,b)=>a+b);
},
System.Threading.Tasks.TaskScheduler.FromCurrentSynchronizationContext());
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/123183/Asynchronus-MVVM-Stop-the-Dreaded-Dead-GUI-Problem?msg=3683393 | CC-MAIN-2014-23 | en | refinedweb |
Code. Collaborate. Organize.
No Limits. Try it Today.
Users of C#, VB.NET and MC++ have a nice feature available :- delegates. The C++
language does not support this construct. But fortunately, there is a way to
implement rudimentary delegates using templates and some clever tricks borrowed
from the boost library.
I assume that you have a solid C++ background. In this article I will solve the
delegate problem using member function pointers and templates. You may want to
read up on these topics before you read any further.
For those who have not yet been acquainted with .NET-languages, here's a short
explanation.
Put simply, delegates are objects which allows calling methods on objects. Big
deal? It is a big deal, since these objects masquerade as free functions with
no coupling whatsoever to a specific object. As the name implies, it delegates
method calls to a target object.
Since it is possible to take the address of a member function, and apply that
member function on any object of the class which defined the member function,
it is logical that one should be able make a delegate construct. One way would
be to store the address of an object alongside with one of its member functions.
The storage could be an object which overloads operator(). The
type signature (return type and argument types) of operator() should
match the type signature of the member function which we are use for
delegation. A very non-dynamic version could be:
operator()
struct delegate {
type* obj;
// The object which we delegate the call to
int (type::* method)(int);
// Method belonging to type, taking an int
// and returning an int
delegate(type* obj_,
int (type::* method)(int)) : obj(obj_),
method(method_) { }
int operator()(int x) {
// See how this operator() matches
// the method type signature
return (obj->*method)(x);
// Make the call
}
};
The above solution is not dynamic in any way. It can only deal with objects of
type type, methods taking an int and returning an
int. This forces
us to either write a new delegate type for each object type/method combination
we wish to delegate for, or use object polymorphism where all classes derive
from type - and be satisfied with only being able to delegate
virtual methods defined in type which matches the int/int type
signature! Clearly, this is not a good solution.
type
int
int/int
Obviously we need to parameterize the object type, parameter type and return
type. The only way to do that in C++ is to use templates. A second attempt may
look like this:
template <typename Class, typename T1, typename Result>
struct delegate {
typedef Result (Class::* MethodType)(T1);
Class* obj;
MethodType method;
delegate(Class* obj_,
MethodType method_) : obj(obj_), method(method_) { }
Result operator()(T1 v1) {
return (obj->*method)(v1);
}
};
Much better! Now we can delegate any object and any method with one parameter
in that object. This is a clear improvement over the previous implementation.
Unfortunately, it is not possible to write the delegate so that it can handle any
number of arguments. To solve this problem for methods taking two parameters,
one has to write a new delegate which handles two parameters. To solve the
problem for methods taking three parameters, one has to write a new delegate
which handles three parameters - and so on. This is however not a big problem.
If you need to cover all your methods, you will most likely not need more than
ten such delegate templates. How many of your methods have more than ten
parameters? If they do, are you sure they should have more than ten? Also,
you'd only need to write these ten delegate once - the sweet power of
templates.
However, a small problem, besides the parameter problem, still remains. When
this delegate template is instantiated, the resulting delegate type will only
be able to handle delegations for the class you supplied as template parameter.
The delegate<A, int, int> type is different from delegate<B,
int, int>. They are similar in that they delegate method calls
taking an int and returning an int. They are
dissimilar in that they do not delegate for methods of the same class. .NET
delegates ignore this dissimilarity, and so should we!
delegate<A, int, int>
delegate<B,
int, int>
To remove this type dissimilarity, it is obvious that we need to remove the
class type as a template parameter. This is best accomplished by using object
polymorphism and template constructors. This is a technique which I've borrowed
from the boost template library. More
specifically, I borrowed it from the implementation of the any class in
that library.
any
Since I'm not a native English writer, I will not attempt to describe the final
code with words. I could try but I think I'd just make it more complex than it
is. Put simply, I use polymorphism and a templated constructor to gain an extra
level of indirection so that I can "peel" away the class information from the
delegate. Here's the code:
// The polymorphic base
template <typename T1, typename Result>
struct delegate_base { // Ref counting added 2002-09-22
int ref_count; // delegate_base's are refcounted
delegate_base() : ref_count(0) { }
void addref() {
++ref_count;
}
void release() {
if(--ref_count < 0)
delete this;
}
virtual ~delegate_base() { } // Added 2002-09-22
virtual Result operator()(T1 v1) = 0;
};
// The actual implementation of the delegate
template <typename Class, typename T1, typename Result>
struct delegate_impl : public delegate_base<T1, Result> {
typedef Result (Class::* MethodType)(T1);
Class* obj;
MethodType method;
delegate_impl(Class* obj_, MethodType method_) :
obj(obj_), method(method_) { }
Result operator()(T1 v1) {
return (obj->*method)(v1);
}
};
template <typename T1, typename Result>
struct delegate {
// Notice the type: delegate_base<T1,
// Result> - no Class in sight!
delegate_base<T1, Result>* pDelegateImpl;
// The templated constructor - The presence of Class
// does not "pollute" the class itself
template <typename Class, typename T1, typename Result>
delegate(Class* obj, Result (Class::* method)(T1))
: pDelegateImpl(new delegate_impl<Class,
T1, Result>(obj, method)) {
pDelegateImpl->addref(); // Added 2002-09-22
}
// Copy constructor and assignment operator
// added 2002-09-27
delegate(const delegate<T1,
Result>& other) {
pDelegateImpl = other.pDelegateImpl;
pDelegateImpl->addref();
}
delegate<T1, Result>& operator=(const delegate<T1,
Result>& other) {
pDelegateImpl->release();
pDelegateImpl = other.pDelegateImpl;
pDelegateImpl->addref();
return *this;
}
~delegate() { pDelegateImpl->release();
} // Added & modified 2002-09-22
// Forward the delegate to the delegate implementation
Result operator()(T1 v1) {
return (*pDelegateImpl)(v1);
}
};
There, .NET delegate requirements satisfied! For information on how
to actually use the delegates, see the demo source code available
for download at the top of this article.
Because I think delegates can be quite powerful, and I for one like to have
powerful tools in the toolbox. They might be useful some. | http://www.codeproject.com/Articles/2922/NET-like-Delegates-in-Unmanaged-C?fid=7643&df=90&mpp=10&sort=Position&spc=None&tid=670578 | CC-MAIN-2014-23 | en | refinedweb |
Code. Collaborate. Organize.
No Limits. Try it Today.
Smart.
A Smart Pointer is a C++ object that acts like a pointer, but additionally deletes the object when it is no longer needed.
"No longer needed" is hard to define, since resource management in C++ is very complex. Different smart pointer implementations cover the most common scenarios. Of course, different tasks than just deleting the object can be implemented too, but these applications are beyond the scope of this tutorial.
Many libraries provide smart pointer implementations with different advantages and drawbacks. The samples here use the BOOST library, a high quality open source template library, with many submissions considered for inclusion in the next C++ standard.
Boost provides the following smart pointer implementations:
shared_ptr<T>
T"
shared_ptr
scoped_ptr<T>
intrusive_ptr<T>
T
weak_ptr<T>
shared_array<T>
scoped_array<T>
scoped_ptr
Let's start with the simplest one:
scoped_ptr is the simplest smart pointer provided by boost. It guarantees automatic deletion when the pointer goes out of scope..();
}
Using "normal" pointers, we must remember to delete it at every place we exit the function. This is especially tiresome (and easily forgotten) when using exceptions. The second example uses a scoped_ptr for the same task. It automatically deletes the pointer when the function returns 8 even in the case of an exception thrown, which isn't even covered in the "raw pointer" sample!)
The advantage is obvious: in a more complex function, it's easy to forget to delete an object. scoped_ptr does it for you. Also, when dereferencing a NULL pointer, you get an assertion in debug mode.
NULL
std::auto_ptr
Reference counting pointers track how many pointers are referring to an object, and when the last pointer to an object is destroyed, it deletes the object itself, too.
The "normal" reference counted pointer provided by boost is shared_ptr (the name indicates that multiple pointers can share the same object). Let's look at a few examples:
void Sample2_Shared()
{
// (A) create a new CSample instance with one reference
boost::shared_ptr<CSample> mySample(new CSample);
printf("The Sample now has %i references\n", mySample.use_count()); // should be 1
// (B) assign a second pointer to it:
boost::shared_ptr<CSample> mySample2 = mySample; // should be 2 refs by now
printf("The Sample now has %i references\n", mySample.use_count());
// (C) set the first pointer to NULL
mySample.reset();
printf("The Sample now has %i references\n", mySample2.use_count()); // 1
// the object allocated in (1) is deleted automatically
// when mySample2 goes out of scope
}
Line (A) creates a new CSample instance on the heap, and assigns the pointer to a shared_ptr, mySample. Things look like this:.
Note: If you never heard of PIMPL (a.k.a. handle/body) or RAII, grab a good C++ book - they are important concepts every C++ programmer should know. Smart pointers are just one way to implement them conveniently in certain cases - discussing them here would break the limits of this article.:
typedef boost::shared_ptr<CMyLargeClass> CMyLargeClassPtr;
std::vector<CMyLargeClassPtr> vec;
vec.push_back( CMyLargeClassPtr(new CMyLargeClass("bigString")) );
Very similar, but now, the elements get destroyed automatically when the vector is destroyed - unless, of course, there's another smart pointer still holding a reference. Let's have a look at sample 3:
void Sample3_Container()
{
typedef boost::shared_ptr<CSample> CSamplePtr;
// (A) create a container of CSample pointers:
std::vector<CSamplePtr> vec;
// (B) add three elements
vec.push_back(CSamplePtr(new CSample));
vec.push_back(CSamplePtr(new CSample));
vec.push_back(CSamplePtr(new CSample));
// (C) "keep" a pointer to the second:
CSamplePtr anElement = vec[1];
// (D) destroy the vector:
vec.clear();
// (E) the second element still exists
anElement->Use();
printf("done. cleanup is automatic\n");
// (F) anElement goes out of scope, deleting the last CSample instance
}
A few things can go wrong with smart pointers (most prominent is an invalid reference count, which deletes the object too early, or not at all). The boost implementation promotes safety, making all "potentially dangerous" operations explicit. So, with a few rules to remember, you are safe.
There are a few rules you should (or must) follow, though:
Rule 1: Assign and keep - Assign a newly constructed instance to a smart pointer immediately, and then keep it there. The smart pointer(s) now own the object, you must not delete it manually, nor can you take it away again. This helps to not accidentally delete an object that is still referenced by a smart pointer, or end up with an invalid reference count.
Rule 2: a _ptr<T> is not a T * - more correctly, there are no implicit conversions between a T * and a smart pointer to type T.
:
struct CDad;
struct CChild;
typedef boost::shared_ptr<CDad> CDadPtr;
typedef boost::shared_ptr<CChild> CChildPtr;
struct CDad : public CSample
{
CChildPtr myBoy;
};
struct CChild : public CSample
{
CDadPtr myDad;
};
// a "thing" that holds a smart pointer to another "thing":
CDadPtr parent(new CDadPtr);
CChildPtr child(new CChildPtr);
// deliberately create a circular reference:
parent->myBoy = child;
child->myDad = dad;
// resetting one ptr...
child.reset();
parent still references the CDad object, which itself references the CChild. The whole thing looks like this:
parent
CDad
CChild
If we now call dad.reset(), we lose all "contact" with the two objects. But this leaves both with exactly one reference, and the shared pointers see no reason to delete either of them! We have no access to them anymore, but they mutually keep themselves "alive". This is a memory leak at best; in the worst case, the objects hold even more critical resources that are not released:
Strong vs. Weak References:
A strong reference keeps the referenced object alive (i.e., as long as there is at least one strong reference to the object, it is not deleted). boost::shared_ptr acts as a strong reference. In contrast, a weak reference does not keep the object alive, it merely references it as long as it lives.
Note that a raw C++ pointer in this sense is a weak reference. However, if you have just the pointer, you have no ability to detect whether the object still lives.
boost::weak_ptr<T> is a smart pointer acting as weak reference. When you need it, you can request a strong (shared) pointer from it. (This can be NULL if the object was already deleted.) Of course, the strong pointer should be released immediately after use. In the above sample, we can decide to make one pointer weak:
boost::weak_ptr<T>
struct CBetterChild : public CSample
{
weak_ptr<CDad> myDad;
void BringBeer()
{
shared_ptr<CDad> strongDad = myDad.lock(); // request a strong pointer
if (strongDad) // is the object still alive?
strongDad->SetBeer();
// strongDad is released when it goes out of scope.
// the object retains the weak pointer
}
};
See the Sample 5 for more.
shared_ptr offers quite some services beyond a "normal" pointer. This has a little price: the size of a shared pointer is larger than a normal pointer, and for each object held in a shared pointer, there is a tracking object holding the reference count and the deleter. In most cases, this is negligible.
intrusive_ptr provides an interesting tradeoff: it provides the "lightest possible" reference counting pointer, if the object implements the reference count itself. This isn't so bad after all, when designing your own classes to work with smart pointers; it is easy to embed the reference count in the class itself, to get less memory footprint and better performance.
intrusive_ptr
To use a type T with intrusive_ptr, you need to define two functions: intrusive_ptr_add_ref and intrusive_ptr_release. The following sample shows how to do that for a custom class:
intrusive_ptr_add_ref
intrusive_ptr_release
#include "boost/intrusive_ptr.hpp"
// forward declarations
class CRefCounted;
namespace boost
{
void intrusive_ptr_add_ref(CRefCounted * p);
void intrusive_ptr_release(CRefCounted * p);
};
// My Class
class CRefCounted
{
private:
long references;
friend void ::boost::intrusive_ptr_add_ref(CRefCounted * p);
friend void ::boost::intrusive_ptr_release(CRefCounted * p);
public:
CRefCounted() : references(0) {} // initialize references to 0
};
// class specific addref/release implementation
// the two function overloads must be in the boost namespace on most compilers:
namespace boost
{
inline void intrusive_ptr_add_ref(CRefCounted * p)
{
// increment reference count of object *p
++(p->references);
}
inline void intrusive_ptr_release(CRefCounted * p)
{
// decrement reference count, and delete object when reference count reaches 0
if (--(p->references) == 0)
delete p;
}
} // namespace boost
This is the most simplistic (and not thread safe) implementation. However, this is such a common pattern, that it makes sense to provide a common base class for this task. Maybe another!).
There is a "little" problem with VC6 that makes using boost (and other libraries) a bit problematic out of the box.
The Windows header files define macros for min and max, and consequently, these respective functions are missing from the (original) STL implementation. Some Windows libraries such as MFC rely on min/max being present. Boost, however, expects min and max in the std:: namespace. To make things worse, there is no feasible min/max template that accepts different (implicitly convertible) argument types, but some libraries rely on that..
Please note: While I am happy about (almost) any feedback, please do not ask boost-specific questions here. Simply put, boost experts are unlikely to find your question here (and I'm just a boost noob). Of course, if you have questions, complaints, or recommendations regarding the article or the sample project, you. | http://www.codeproject.com/Articles/8394/Smart-Pointers-to-boost-your-code?fid=111912&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&select=935314&fr=42&PageFlow=FixedWidth | CC-MAIN-2014-23 | en | refinedweb |
Getting started
In this article, you will build a Web application using Grails, and deploy it on Apache Geronimo. Following is the software you need installed to follow along.
- Java Development Kit
- Both Grails and Geronimo require a Java Development Kit. Grails needs only Java V1.4, but Geronimo needs V1.5. That is because it is a Java EE V5-certified application server, and Java EE V5 uses Java V1.5 features such as annotations and generics. In this article, we use Java SE V1.6_05.
- Groovy programming language
- Grails uses the Groovy programming language, but this is included with Grails, so there are no additional downloads. Grails also uses many best-of-breed products, such as Hibernate and Spring, but these are also included with Grails.
- Grails
- This article uses Grails V1.0.2.
- Apache Geronimo
- This article uses Apache Geronimo V2.1.1. You can use either Geronimo with Tomcat or Jetty, but this article used the Jetty distribution.
- MySQL
- This article uses MySQL V5.0.41, but you should be able to use any database supported by Hibernate.
This article is not an introduction to Grails. You should already be familiar with Grails, though you could probably get by if you are familiar with Ruby on Rails and Java. The Resources section has some good articles to get you up to speed on Grails.
The Grails ad network
To demonstrate how you can use Grails and Geronimo together, you will build a simple application with Grails, then deploy and enhance it with Geronimo. For your application, you will build a simple ad network. Here is a quick breakdown of the use cases of the application:
- An advertiser can register on the network. All that's needed is a name and password.
- An advertiser can log in to the network.
- An advertiser can create an ad. The ad will have a title, text, and a URL for an image. It will also have a start and end date, a bid, and keywords. The keywords determine when the ad gets displayed, and the bid is used for priority when multiple ads have the same keyword.
- An affiliate can call a Web service to get a list of ads. The affiliate will provide a keyword and a maximum number of ads.
This is a pretty simple application. The Grails ad network is unlikely to supplant Google or Yahoo!, but it does let us touch many of the attractive features of Grails. Let's take a look at some of those features now and see how they enable us to rapidly develop the ad-network application using Grails.
Grails leverages the principles of convention over configuration and don't repeat yourself to greatly reduce the amount of code you need to write for a typical Web application. You will kick-start your development using some of the Grails code-generation scripts.
Creating an application
Let's call the application adserver. You will start by using the Grails command
generate-app. You will not spend too much time with how this command
works (see Resources if you are unfamiliar with it). What
is important is that your application is set up according to the Grails conventions.
Not only is it crucial for the Grails framework to know where to find classes and
configuration metadata but it will also make it much easier to package and deploy your
application. Figure 1 shows a snapshot of what this directory structure should look like.
Figure 1. The adserver application structure
Grails encourages a bottom-up development process, where you usually start by defining
your domain models. You can use the Grails command
create-domain-class to help with this. The first model you will
create is your
Advertiser model, as shown in Listing 1.
Listing 1. The
Advertiser class
class Advertiser { static hasMany = [ads:Ad] String name String password }
This class is pretty simple, as you would expect if you were familiar with Grails or
Groovy in general. The most complicated thing here is the
hasMany line. This indicates that an
Advertiser can have multiple
Ads. Let's
take a look at the
Ad class.
Listing 2. The
Ad class
class Ad { static belongsTo = [advertiser:Advertiser] String title String imageUrl String text String keywords Date startDate Date endDate Integer bid }
This class is also pretty simple, with the other end of the one-to-many relationship
between it and the
Advertiser class. You could now use the
Grails script
generate-all to create scaffolding code
(controller classes and GNU Server Pages (GSP) views for all CRUD actions) for each of these classes,
then run the application using the
run-app command. This is
optional, but especially useful if you are new to Grails. It gives you good examples of
how to use the domain classes, as well as how to write a controller and a view. We need
this knowledge to create more customized controllers and views for your application.
Customizing the application
The first thing you will want to do is let your users (advertisers) register. Create a
RegisterController. Its code is shown in Listing 3.
Listing 3. The
RegisterController
class RegisterController { def index = { } def save = { def exists = Advertiser.findWhere(name:params.name) if (exists){ flash.message = "The name " + params.name + " is already taken" redirect(action:index) } def advertiser = new Advertiser() advertiser.properties = params advertiser.save(flush:true) session.advertiser=advertiser redirect(controller:"ad",action:"adsFor") } }
The main thing this does is check to see if the name the advertiser picked has
already been taken. If it has, an error message is created and you redirect to the
index page. If the name is not taken, the advertiser is created. I put the
Advertiser instance in the session, so I do not have to look it up
again, and I redirect to the
Ad controller. You will look
at that class soon, but first you need to let the advertiser log in. The
Login controller is shown in Listing 4.
Listing 4.
Login controller
class LoginController { def index = { } def login = { def advertiser = Advertiser.findWhere( name:params.name , password:params.password ) if (!advertiser){ flash.message = "The password does not match the name of the advertiser" redirect(action:index) } session.advertiser=advertiser redirect(controller:"ad",action:"adsFor") } }
This is another simple Groovy class. It simply checks if the name and password
match up to the database. If it does, it sets the advertiser in the session and
forwards to the same action in the
Ad controller you first
saw in the
Register controller. Before moving on to the
Ad controller, you might have noticed that the controller
methods make direct use of domain-object data-access code. An alternative would be to use a service layer.
Grails service layer
The best practice here would be to factor out the
Advertiser.findWhere code seen in the
RegisterController and
LoginController to a service
layer. To do so, you can use the Grails command
create-service and refactor your code to move business logic to the
service class. For the register/login scenario shown above, it might look something like Listing 5.
Listing 5. Sample
AdvertiserService
class AdvertiserService { boolean transactional = true def advertiserExists(String name){ def exists = Advertiser.findWhere(name:name) exists != null } def login(String name, String password){ Advertiser.findWhere(name:name, password:password) } }
Remember, Grails is built on proven Java technology such as the Spring framework. A Grails service class becomes a Spring bean. Your service will be a singleton by default, just like it would be if you were using Spring directly. It can also be injected to other Spring beans, and all controllers happen to be Spring beans as well. Convention over configuration kicks in again here and makes it easy to reference the service in a controller.
Listing 6. Using a service in a controller
class LoginController { def advertiserSerivce def index = { } def login = { def advertiser = advertiserService.login(params.name, params,password) if (!advertiser){ flash.message = "The password does not match the name of the advertiser" redirect(action:index) } session.advertiser=advertiser redirect(controller:"ad",action:"adsFor") } }
By simply having a member variable following the xyzService convention, Grails knows to inject the xyzService singleton. In this sample, you will keep things simple and will not worry too much about a service layer. However, it is important to recognize this as a feature of Grails that really sets it apart from many other rapid-development/convention-over-configuration frameworks. Let's get back to the application code and look at how ads are created and served.
Creating and serving ads
So far, we have two controllers — one for registering and one for logging
into the ad network. Both controllers forward control to another controller: the
AdsController.
Listing 7. The
AdsController
import grails.converters.XML class AdController { def index = { redirect(action:list,params:params) } // the delete, save and update actions only accept POST requests def allowedMethods = [delete:'POST', save:'POST', update:'POST', placement:'GET'] def adsFor = { render(view:"list", model:[adList: Ad.findAllWhere(advertiser : session.advertiser )]); } def list = { if(!params.max) params.max = 10 [ adList: Ad.list( params ) ] } def placement = { def ads = Ad.findAll("from Ad as ad where ad.keywords like '%"+params.keyword+"%' order by ad.bid desc") render ads as XML } def show = { def ad = Ad.get( params.id ) if(!ad) { flash.message = "Ad not found with id ${params.id}" redirect(action:list) } else { return [ ad : ad ] } } def delete = { def ad = Ad.get( params.id ) if(ad) { ad.delete() flash.message = "Ad ${params.id} deleted" redirect(action:list) } else { flash.message = "Ad not found with id ${params.id}" redirect(action:list) } } def edit = { def ad = Ad.get( params.id ) if(!ad) { flash.message = "Ad not found with id ${params.id}" redirect(action:list) } else { return [ ad : ad ] } } def update = { def ad = Ad.get( params.id ) if(ad) { ad.properties = params if(!ad.hasErrors() && ad.save()) { flash.message = "Ad ${params.id} updated" redirect(action:show,id:ad.id) } else { render(view:'edit',model:[ad:ad]) } } else { flash.message = "Ad not found with id ${params.id}" redirect(action:edit,id:params.id) } } def create = { def ad = new Ad() ad.properties = params return ['ad':ad] } def save = { def ad = new Ad(params) if (session.advertiser){ ad.advertiser = session.advertiser } if(!ad.hasErrors() && ad.save()) { flash.message = "Ad ${ad.id} created" redirect(action:show,id:ad.id) } else { render(view:'create',model:[ad:ad]) } } }
If you have used the scaffolding command (
generate-all ad),
you will recognize much of this code. It has been customized in several ways though. You
added the
adsFor method, as this is what we have been
forwarding to from the register and login controllers. This uses the advertiser from
the session to get all the ads owned by the advertiser, then renders this with the
"list" view. The
save method has also been modified to use
the advertiser from the session. Finally, the placement method has been added. This is
the method that will be used by affiliates to retrieve ads. It takes a
keyword parameter and gets all the ads with that keyword. It uses a
custom query that uses the Hibernate Query Language, which is a REST Web service that
uses XML to serialize the data. Grails makes this easy by simply returning render ads
as XML. You have not had to add much code to customize your application to handle all
your use cases. You can quickly find out just how little code you have actually written
using the Grails
stats command.
Listing 8. Ad-server application stats
$ grails stats/Stats.groovy Environment set to development +----------------------+-------+-------+ | Name | Files | LOC | +----------------------+-------+-------+ | Controllers | 4 | 179 | | Domain Classes | 2 | 15 | | Services | 1 | 10 | | Integration Tests | 5 | 20 | +----------------------+-------+-------+ | Totals | 12 | 224 | +----------------------+-------+-------+
That is not a whole lot of code. Actually, most of it is code you generated using
the
generate-all scaffolding for
advertiser and
ad. Grails
makes it easy to develop by not making you write much code. It also makes it easy to develop by making it easy to test.
Testing the application
To run the application, simply issue the Grails command
run-app. This invokes a script that uses an embedded in-memory database and an
embedded Jetty Web container, so you do not need to set up a stand-alone server or
database. You should be able to immediately go to and access your
controllers, etc. Once you have tested and debugged your application, you are ready to start using it with Geronimo.
Leveraging Geronimo
So now you have developed a Web application that satisfies all of your use cases. You were able to develop it rapidly and had to write less code because you took advantage of Grails. Of course, less code is obviously a relative term — less code compared to what? The obvious answer is a typical Java Web application. There is no question that Grails lets you develop faster and write less code than a typical Java Web application, but is there a cost to this? The good news is that a Grails application is a Java Web application. Grails lets you package it up as a standard Java Web application, a WAR, and deploy it to any Java Web container — including Apache Geronimo. Let's take a look at how you deploy your Grails ad-server application to Geronimo.
Deploying a Grails WAR
The first step in deploying our Grails application to Geronimo is to create a WAR.
Looking at the Grails directory structure, it would not be too hard to write an Ant
script to do this, but luckily, Grails make it even easier by providing a simple Grails
command:
war. The script compiles code from the grails-app
tree and combines it with code from the base Grails ($GRAILS_HOME) directory.
The result is added to the /web-app directory. Since you are using Geronimo, you need
to add a Geronimo deployment plan. You can simply create a geronimo-web.xml file in /web-app/WEB-INF.
Listing 9.> </environment> <context-root>/adserver</context-root> </web-app>
There is one very important thing to notice here, and that is the hidden-classes section. These are packages that are included by default with Geronimo, but are also included with Grails. This tells the class loader that will load our Grails app, to ignore any classes from these packages available to the parent class loader (i.e., the container's class loader). This will guarantee that the Grails versions of these classes are loaded and we will not have any nasty class-loader conflicts.
Now you are ready to run the Grails
war command. Listing 10
shows the command being run and the output of the command.
Listing 10. Using the
war command
$ grails war/War.groovy Environment set to production [delete] Deleting: /Users/michael/.grails/1.0.2/projects/ adserver/resources/web.xml [delete] Deleting directory /Users/michael/.grails/1.0.2/projects/adserver/classes [delete] Deleting directory /Users/michael/.grails/1.0.2/projects/adserver/resources [mkdir] Created dir: /Users/michael/.grails/1.0.2/projects/adserver/classes [groovyc] Compiling 13 source files to /Users/michael/.grails/1.0.2/ projects/adserver/classes [mkdir] Created dir: /Users/michael/.grails/1.0.2/projects/adserver/ resources/grails-app/i18n [native2ascii] Converting 10 files from /Users/michael/code/grails/adserver/ grails-app/i18n to /Users/michael/.grails/1.0.2/projects/adserver/resources/ grails-app/i18n [copy] Copying 1 file to /Users/michael/.grails/1.0.2/projects/adserver/classes [copy] Copying 1 file to /Users/michael/.grails/1.0.2/projects/adserver/resources [mkdir] Created dir: /Users/michael/code/grails/adserver/staging [copy] Copying 93 files to /Users/michael/code/grails/adserver/staging [copy] Copied 19 empty directories to 1 empty directory under /Users/michael/ code/grails/adserver/staging [copy] Copying 23 files to /Users/michael/code/grails/adserver/staging/ WEB-INF/grails-app [copy] Copying 55 files to /Users/michael/code/grails/adserver/staging/ WEB-INF/classes [mkdir] Created dir: /Users/michael/code/grails/adserver/staging/ WEB-INF/spring [copy] Copying 1 file to /Users/michael/code/grails/adserver/staging/ WEB-INF/classes [mkdir] Created dir: /Users/michael/code/grails/adserver/staging/ WEB-INF/templates/scaffolding [copy] Copying 6 files to /Users/michael/code/grails/adserver/staging/ WEB-INF/templates/scaffolding [copy] Copying 50 files to /Users/michael/code/grails/adserver/staging/ WEB-INF/lib [copy] Copying 1 file to /Users/michael/code/grails/adserver/staging/ WEB-INF [delete] Deleting: /Users/michael/.grails/1.0.2/projects/adserver/resources/web.xml [copy] Warning: /Users/michael/code/grails/adserver/plugins not found. [propertyfile] Updating property file: /Users/michael/code/grails/adserver/staging/ WEB-INF/classes/application.properties [mkdir] Created dir: /Users/michael/code/grails/adserver/staging/WEB-INF/plugins [copy] Warning: /Users/michael/code/grails/adserver/plugins not found. [jar] Building jar: /Users/michael/code/grails/adserver/adserver-0.1.war [delete] Deleting directory /Users/michael/code/grails/adserver/staging Done creating WAR /Users/michael/code/grails/adserver/adserver-0.1.war
To deploy the WAR, you can simply add it to the Geronimo deployment directory ($GERONIMO_HOME/deploy, where $GERONIMO_HOME is the location of your Geronimo installation) or you can use the Geronimo Console.
Figure 2. Using the Geronimo console to deploy Grails WAR
In the Geronimo deployment plan in Listing 9, we set the context path of the application to /adserver, so will bring up our deployed application. Now that our Grails application is running on Geronimo, let's look at how Geronimo can be used to enhance the application.
Creating a database pool
It is common knowledge that you can gain a lot of performance benefits by using database connection pools. You save the significant overhead of creating a database connection on every request to your Web application. Thus, it is very common to set up a database connection pool in Java Web applications. An application server like Geronimo makes it even easier to create such a pool that can be reused by any application deployed on Geronimo. You can create a pool easily from the Geronimo console, as shown in Figure 3.
Figure 3. Creating a database pool in Geronimo
Now that the database pool has been created, we just need to access it from our Grails application. There are a couple of steps we need to do for this.
Accessing a database pool from Grails
The database pool is represented as a Java
DataSource object
and it is bound in JNDI, so any application can use it. For this to exist in the context
of a Web application, it needs to be referenced in our web.xml file. Wait a minute
— what web.xml? Grails creates one for us that it bases off of
$GRAILS_HOME/conf/webdefault.xml. You could edit that, or you could generate the WAR,
unzip it, and edit the generated one. Either way, you will need to add the section shown in Listing 11 to it.
Listing 11. Referencing a
DataSource in web.xml
<resource-ref> <res-ref-name>jdbc/MyDataSource</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> <res-sharing-scope>Shareable</res-sharing-scope> </resource-ref>
This can usually be added just before the end of the web.xml file. Next, we need to add a reference to our Geronimo deployment plan.
Listing 12. New> <dependencies> <dependency> <groupId>console.dbpool</groupId> <artifactId>adserver</artifactId> </dependency> </dependencies> </environment> <context-root>/adserver</context-root> <resource-ref> <ref-name>jdbc/MyDataSource</ref-name> <resource-link>adserver</resource-link> </resource-ref> </web-app>
Note that we called our database pool
adserver and that is
the
resource-link you see in the
resource-ref and the dependency. The
ref-name in the deployment plan must match the
res-ref-name in the web.xml snippet in Listing 11. This connects the
dots between Geronimo and your Web application.
Now the connection pool is available to the application. So how do you access it? Turns out that Grails makes this part easy. All we need to do is edit /grails-app/conf/DataSource.groovy.
Listing 13. DataSource.groovy
dataSource { jndiName = "java:comp/env/jdbc/MyDataSource" } hibernate { cache.use_second_level_cache=true cache.use_query_cache=true cache.provider_class='org.hibernate.cache.EhCacheProvider' } // environment specific settings environments { development { dataSource { dbCreate = "update" } } test { dataSource { dbCreate = "create-drop" } } production { dataSource { dbCreate = "create" } } }
Normally, the first
dataSource block is where we define JDBC
properties, like driver class, username, password, etc. Now we simply provide the JNDI
name. This name must be "java:comp/env/" plus whatever you put for the
res-ref-name in the web.xml file. You can override all of these
settings for any of your environments, just like you normally would.
Summary
You have seen how Grails makes it possible to rapidly build a Web application. Its use of convention over configuration makes a lot of popular technologies fuse together with no effort. You follow the conventions and Grails makes your life easy. You still have to write code for your business logic, but even that code is minimal, courtesy of the expressiveness of the Groovy programming language. Grails with Geronimo just gives this happy story an even happier ending. You are able to leverage all the power of Geronimo as our Grails application runs just like any other Java Web application and can access Geronimo resources in a similar fashion. From here, you can make your Grails application more sophisticated by using more Geronimo features like messaging, or you could make it more scalable by deploying it to a Geronimo cluster.
Download
Resources
Learn
- "Mastering Grails: Build your first Grails application" is necessary if you're new to Grails.
- Read "Mastering Grails: GORM: Funny name, serious technology" to learn all about how Grails builds on top of Hibernate to interface with your database.
- "Mastering Grails: Changing the view with Groovy Server Pages": Don't like the UI of this app? Learn all the ways to change it.
- Are you a fan of the succinctness that Groovy provides? Learn all about its concise syntax by reading "Practically Groovy: Reduce code noise with Groovy."
- Groovy is another promising language that compiles to Java bytecode. Read about creating XML with it in the developerWorks article "Practically Groovy: Mark it up with Groovy Builders."
- Grails.org is the best place for Grails information.
- The Grails manual is every Grails developer's best friend.
- Take a look at "Grails vs Rails Performance Benchmarking" to see how Grails has significant advantages over Rails by being on top of Java, Hibernate, and Spring.
- Deploying an application on Geronimo is easy, but there is a lot that goes on. Learn all about it in the developerWorks article "Understand Geronimo's deployment architecture."
- Read "Remotely deploy Web applications on Apache Geronimo" to find out how to deploy your Grails applications to remote instances of Geronimo.
- Learn about other alternative languages running on the JVM in the developerWorks article "Invoke dynamic languages dynamically, Part 1."
- Read the latest Geronimo documentation and news on the Geronimo wiki.
- Get involved in the Geronimo project.
- Join the Apache Geronimo mailing list.
- Understand what you need to do to apply the Apache License V2.
- Stay current with developerWorks technical events and webcasts.
- Browse all the Apache articles and no-cost Apache tutorials available in the developerWorks Open source zone.
- To listen to interesting interviews and discussions for software developers, check out developerWorks podcasts.
- Watch and learn about IBM and open source technologies and product functions with the no-cost developerWorks On demand demos.
Get products and technologies
- Download the Java SDK. This article uses Java SE V1.6_05.
- Download Grails V1.02.
- This article uses Geronimo V2.1.1.
- Download MySQL V5.0.41.
- Download the latest version of Apache Geronimo.
- Download your no-cost copy of IBM WebSphere® Application Server Community Edition— a lightweight J2EE application server built on Apache Geronimo open source technology that is designed to help you accelerate your development and deployment efforts.
-. | http://www.ibm.com/developerworks/library/os-ag-grails/index.html | CC-MAIN-2014-23 | en | refinedweb |
06 May 2013 22:27 [Source: ICIS news]
HOUSTON (ICIS) –?xml:namespace>
Total March US exports of nylon were 62,042 tonnes, up from 56,120 tonnes a year ago and 56,666 tonnes in February 2013.
Exports to
March imports of nylon rose by 10%, from 7,294 a year ago to 8,028 in March 2013. Shipments from
ICIS provides pricing reports for nylon 6 and nylon 6,6.
US producers of nylon include Ascend Performance Materials, BASF, DuPont, EMS-Grivory, Honeywell, INVISTA, NYCOA and Sh | http://www.icis.com/Articles/2013/05/06/9665385/us-march-nylon-exports-rose-by-11-compared-with-year-ago--itc.html | CC-MAIN-2014-23 | en | refinedweb |
This is your resource to discuss support topics with your peers, and learn from each other.
12-10-2012 02:47 PM
Cannot get InvokeActionItem to recognized and/or executed
Copied code ffrom documentation (2 places, more or less identical) and added to my actions. My ActionItems still appeared in menu and executed but the InvokeActionItem entries did not appear in menu and when I forced them to the ActionBar I got the error below at execution time
QML Code
at end of page definition
actions: [
ActionItem {
title: qsTr("Sound")
ActionBar.placement: ActionBarPlacement.OnBar
onTriggered: {
myClass.displaySettingsPage("sound");
}
},
ActionItem {
title: qsTr("BBM: View and Update Profile")
ActionBar.placement: ActionBarPlacement.OnBar
onTriggered: {
myClass.displayProfilePage(); //from example
}
},
ActionBar.placement: ActionBarPlacement.OnBar
ActionBar.placement: ActionBarPlacement.OnBar
Here is error message I get
InvocationWrapper:
nQueryFinished: menu service population failed
query mimeType=""
query uri=QUrl("")
query data= ""
query action= "bb.action.BBMCHAT"
query target= ""
query perimeter= 0
MenuManager.error()= 1
MenuManager.isFinished()= true
InvocationWrapper:
nQueryFinished: no matching result from Menu Service for query
mimeType="image/png"
uri=QUrl("")
data= ""
perimeter= 0
action= "bb.action.OPEN"
target= ""
InvocationWrapper:
nQueryFinished: menu service population failed
query mimeType=""
query uri=QUrl("")
query data= ""
query action= "bb.action.INVITEBBM"
query target= ""
query perimeter= 0
MenuManager.error()= 1
MenuManager.isFinished()= true
Solved! Go to Solution.
12-10-2012 04:11 PM
Your "Start BBM Chat" and "Invite to BBM" InvokeActionItems require a valid URI. If it's missing they won't appear. In the case for chat, the URI has to be a PIN of a BBM contact.
12-10-2012 04:29 PM
Yes, but the QML parser doesn't know that a URI is needed.
It explains why my C++ call failed when I tried it without a URI too but not why the action was not added to the Menu.
12-10-2012 04:31 PM
Not all InvokeActionItems require a URI, so the QML parser isn't displaying an error. But those two action IDs do.
12-10-2012 04:53 PM
The example in the documentation doesn't make sense then.
No one would put a hard coded invite into a menu, 'a menu item to invite only one person'. Is it the pin of the "inviter" or the pin of the "invitee". I thought it was the "invitee" expecting a display of a page into which the pin or some other way of identifying the person to be invited and then the invite is sent.
Or, is it not a menu item at all and a simple invoke with a pin already selected from another page. If this is true why do both examples that I have seen show InvokeActionItem as being shown with other menu items. And, if they are not menu items how do you invoke it - where in the user interface does this occur. If it were a slot it could be signalled but I don't see that in the examples.
12-10-2012 05:05 PM
What are you trying to do, dynamically choose a contact from within your application or do you want the BBM contact picker to appear, allowing the user to choose a contact (both are possible)?
12-10-2012 05:27 PM
Go to your slide from your presentation last week "Invoking BBM"
On that slide on the right is what looks like a menu on which there are 5 entries.
Is that a menu or is is not.?
Then, for each menu you have a slide.
For BBM Chat you have
Start a BBM chat
InvokeActionItem {
title: "Start BBM Chat"
query {
blah blah blah
}
}
The docuiumentation and samples show the InvokeActionItems in a list of Actions implying that this is what creates the Menu item and when slected the page on the righT on this slide will appear.
If that is not the case then what do these two slides actually mean?
So I followed the documentation example and added the InvokeActionItems to my action list (inferred by your slides) but they do not appear in the menu. ActionItems do but not InvokeActionItems.
If the documentation is not correct than how is a InvokeActionItem included in and invoked from QML.
12-11-2012 09:21 AM
You are correct. Here is the QML that I used to create the screenshots in the presentation. Note that there is a bug in the current public OS that can cause some "share over NFC" type menu items to appear. What you see in my presentation is what should happen and what'll be shown in the next OS update.
The onTriggered methods used here are optional. I just list them for demonstration. There is also a bug in the current OS that prevents updating the URI in the onTriggered method. This will also be fixed in the next OS release.
import bb.cascades 1.0 Page { content: Container { Label { id: myLabel text: "BBM Invocation Demo" } } actions: [ InvokeActionItem { title: "Start BBM Chat" ActionBar.placement: ActionBarPlacement.OnBar query { invokeActionId: "bb.action.BBMCHAT" uri: "pin:2100000A" //<- This has to be a PIN in your BBM contacts or this won't appear in the menu. } onTriggered: { uri = "pin:2100000B" //<- This also has to be a PIN in your BBM contacts, or else PIN above would be used. } }, InvokeActionItem { title: "Set BBM Avatar Pic" query { invokeTargetId: "sys.bbm.imagehandler" invokeActionId: "bb.action.SET" uri: "" } }, InvokeActionItem { title: "Invite to BBM" query { invokeActionId: "bb.action.INVITEBBM" uri: "pin:2100000A" } }, InvokeActionItem { title: "Share Text Over BBM" query { mimeType: "text/plain" invokeTargetId: "sys.bbm.sharehandler" invokeActionId: "bb.action.SHARE" data: "This is some text to share." } onTriggered: { data = "Some new text" } }, InvokeActionItem { title: "Share Image Over BBM" query { invokeTargetId: "sys.bbm.sharehandler" invokeActionId: "bb.action.SHARE" uri: "" } } ] }
12-11-2012 07:46 PM
Recompiled with Gold version and InvokeActionItems now appear in menu. They are all grouped after action items on menu no matter where they are placed in QML.
Now just have to figure out 1) why I can no longer register app with BBM in Gold and 2) how to dynamically change the uri's seeing as there is no prompt
12-12-2012 09:31 AM
Glad it's working for you now.
Refer to this link for the registration issue.
BlackBerry Messenger Social Platform Registration Changes
As for changing the URI, have a look at the onTriggered method I have above. You can change the URI there, pointing it to a variable or control in your application. | http://supportforums.blackberry.com/t5/Native-Development/InvokeActionItem-not-recognized-does-not-appear-in-menu/td-p/2028615/highlight/true | CC-MAIN-2014-23 | en | refinedweb |
Perl
From FedoraProject
Revision as of 16:58, 18 February)
Fedora perl infrastructure
Perl to CPAN Mapping
With most perl modules being in CPAN, preliminary "mapping" table has been created. This table is regenerated on a daily basis, and will be included in the upcoming Perl SIG Infrastructure hosted project .
Problems that need to be addressed
The following topics need to be discussed/improved/corrected. We need to start discussing them in the fedora-perl-list.
- Improve the RPM perl scripts (requirements and provides detection)
-?
- Try to have RPM patched in order to create the debuginfo files after the %check section script is executed and not before (right now the files are created after the %install check script is executed). This breaks the signature tests (there are also other problems related to the signature tests in the building environment: network access to import pgg keys, where should they be stored, ...). More information available here and in bug #167252 .
- How to update core perl modules?
- The correct @INC directories are still questionable.
- Have a common dir for noarch modules instead of one for each perl version supported.
- The magic that is perl(:WITH_xxx) needs to be better documented and explained, so packagers -- and reviewers! -- know:
- What they are and what they mean
- When to use them
- When to _not_ use them
- Common things to check for that would indicate their usage
- explore possibilities or merging RPM2 functionality into RPM namespace #671389
Miscellaneous
- Clarified packaging guidelines - Some of the packaging guidelines (see Packaging/Perl) have conflicted with some common practises.
- For example, BuildRequires: perl was common but forbidden; that has now been changed. One current issue is the prohibition against including header files in the main package; some perl modules include these deep in the module directory hierarchy, and moving them to a separate -devel package is pointless.
- Notes about "Makefile.PL vs Build.PL" or "ExtUtils::MakeMaker vs Module::Build"
Packagers/Reviewers/People interested in helping
- ChrisWeyl
- Emmanuel Seyman
- GavinHenry
- Iain Arnell
- JasonTibbitts
- Jitka Plesníková
- GavinHenry
- Gerd Pokorra
- Lubomir Rintel
- Marcela Mašláňová
- PaulHowarth
- Petr Písař
- Petr Šabata
- Rüdiger Landmann
- GavinHenry
- StevenPritchard[?] | https://fedoraproject.org/w/index.php?title=Perl&diff=323890&oldid=77602 | CC-MAIN-2014-23 | en | refinedweb |
CGI::Application::Plugin::Session - Plugin that adds session support to CGI::Application
version 1.05
use CGI::Application::Plugin::Session; my $language = $self->session->param('language');
CGI::Application::Plugin::Session seamlessly adds session support to your CGI::Application modules by providing a CGI::Session object that is accessible from anywhere in the application.
Lazy loading is used to prevent expensive file system or database calls from being made if the session is not needed during this request. In other words, the Session object is not created until it is actually needed. Also, the Session object will act as a singleton by always returning the same Session object for the duration of the request.
This module aims to be as simple and non obtrusive as possible. By not requiring any changes to the inheritance tree of your modules, it can be easily added to existing applications. Think of it as a plugin module that adds a couple of new methods directly into the CGI::Application namespace simply by loading the module.
CGI::Application::Plugin::Session - Add CGI::Session support to CGI::Application
This method will return the current CGI::Session object. The CGI::Session object is created on the first call to this method, and any subsequent calls will return the same object. This effectively creates a singleton session object for the duration of the request. CGI::Session will look for a cookie or param containing the session ID, and create a new session if none is found. If
session_config has not been called before the first call to
session, then it will choose some sane defaults to create the session object.
# retrieve the session object my $session = $self->session; - or - # use the session object directly my $language = $self->session->param('language');
This method can be used to customize the functionality of the CGI::Application::Plugin::Session module. Calling this method does not mean that a new session object will be immediately created. The session object will not be created until the first call to $self->session. This 'lazy loading' can prevent expensive file system or database calls from being made if the session is not needed during this request.
The recommended place to call
session_config is in the
cgiapp_init stage of CGI::Application. If this method is called after the session object has already been accessed, then it will die with an error message.
If this method is not called at all then a reasonable set of defaults will be used (the exact default values are defined below).
The following parameters are accepted:
This allows you to customize how the CGI::Session object is created by providing a list of options that will be passed to the CGI::Session constructor. Please see the documentation for CGI::Session for the exact syntax of the parameters.
CGI::Session Allows you to set an expiry time for the session. You can set the DEFAULT_EXPIRY option to have a default expiry time set for all newly created sessions. It takes the same format as the $session->expiry method of CGI::Session takes. Note that it is only set for new session, not when a session is reloaded from the store.
This allows you to customize the options that are used when creating the session cookie. For example you could provide an expiry time for the cookie by passing -expiry => '+24h'. The -name and -value parameters for the cookie will be added automatically unless you specifically override them by providing -name and/or -value parameters. See the CGI::Cookie docs for the exact syntax of the parameters.
NOTE:.
If set to a true value, the module will automatically add a cookie header to the outgoing headers if a new session is created (Since the session module is lazy loaded, this will only happen if you make a call to $self->session at some point to create the session object). This option defaults to true. If it is set to false, then no session cookies will be sent, which may be useful if you prefer URL based sessions (it is up to you to pass the session ID in this case).
The following example shows what options are set by default (ie this is what you would get if you do not call session_config).
$self->session_config( CGI_SESSION_OPTIONS => [ "driver:File", $self->query, {Directory=>'/tmp'} ], COOKIE_PARAMS => { -path => '/', }, SEND_COOKIE => 1, );
Here is a more customized example that uses the PostgreSQL driver and sets an expiry and domain on the cookie.
$self->session_config( CGI_SESSION_OPTIONS => [ "driver:PostgreSQL;serializer:Storable", $self->query, {Handle=>$dbh} ], COOKIE_PARAMS => { -domain => 'mydomain.com', -expires => '+24h', -path => '/', -secure => 1, }, );
This method will add a cookie to the outgoing headers containing the session ID that was assigned by the CGI::Session module.
This method is called automatically the first time $self->session is accessed if SEND_COOKIE was set true, which is the default, so it will most likely never need to be called manually.
NOTE that if you do choose to call it manually that a session object will automatically be created if it doesn't already exist. This removes the lazy loading benefits of the plugin where a session is only created/loaded when it is required.
It could be useful if you want to force the cookie header to be sent out even if the session is not used on this request, or if you want to manage the headers yourself by turning SEND_COOKIE to false.
# Force the cookie header to be sent including some # custom cookie parameters $self->session_cookie(-secure => 1, -expires => '+1w');
This method will let you know if the session object has been loaded yet. In other words, it lets you know if $self->session has been called.
sub cgiapp_postrun { my $self = shift; $self->session->flush if $self->session_loaded;; }
This? }
In a CGI::Application module:
# configure the session once during the init stage sub cgiapp_init { my $self = shift; # Configure the session $self->session_config( CGI_SESSION_OPTIONS => [ "driver:PostgreSQL;serializer:Storable", $self->query, {Handle=>$self->dbh} ], DEFAULT_EXPIRY => '+1w', COOKIE_PARAMS => { -expires => '+24h', -path => '/', }, SEND_COOKIE => 1, ); } sub cgiapp_prerun { my $self = shift; # Redirect to login, if necessary unless ( $self->session->param('~logged-in') ) { $self->prerun_mode('login'); } } sub my_runmode { my $self = shift; # Load the template my $template = $self->load_tmpl('my_runmode.tmpl'); # Add all the session parameters to the template $template->param($self->session->param_hashref()); # return the template output return $template->output; }
CGI::Application, CGI::Session, perl(1)
Cees Hek <ceeshek@gmail.com>
This library is free software. You can modify and or distribute it under the same terms as Perl itself.
Cees Hek <ceeshek@gmail.com>
This software is copyright (c) 2013 by Cees Hek.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | https://metacpan.org/pod/release/FREW/CGI-Application-Plugin-Session-1.05/lib/CGI/Application/Plugin/Session.pm | CC-MAIN-2014-23 | en | refinedweb |
Code. Collaborate. Organize.
No Limits. Try it Today.
In this article, I shall describe one of the methods that can be used to transform C/C++ code into C# code with the least amount of effort. The principles laid out in this article are also suitable for other pairs of languages, though. I want to warn you straight-off that this method is not applicable to porting of any GUI-related code.
What is this useful for? For example, I have used this method to port libtiff, the well-known TIFF library, to C# (and libjpeg too). This allowed me to reuse work of many people contributed to libtiff along with the .NET Framework Class Library in my program. Code examples in my article are taken mainly from libtiff / libjpeg libraries.
What you will need:
The "one-click" build and test runs requirement is there to speed up the "change - compile - run tests" cycle as much as possible. The more time and effort goes into each such cycle, the fewer times it will be executed. This may lead to massive and complex roll-backs of erroneous changes.
You can use any version control system. I personally use Subversion - you may pick up whatever you're comfortable with. Anything instead of a set of folders on the hard disk will do.
Tests are required to make sure that the code still retains all of its features at any given time. Being safe in the knowledge that no functional changes are introduced into the code is what sets my method apart from the "let's rewrite it from scratch in the new language" approach. Tests are not required to cover a 100% of the code, but it's desirable to have the tests for all the main features of the code. The tests shouldn't be accessing the internals of the code to avoid constant rewriting of them.
Here's what I used to port LibTiff:
To grasp refactoring concepts, you only need to read one book. Martin Fowler's Refactoring: Improving the Design of Existing Code. Be sure to read it if you still haven't. Any programmer can only gain by knowing refactoring principles. You don't have to read the entire book, first 130 pages from the beginning is enough. This is the first five chapters and the beginning of the sixth, up to the "Inline Method".
It goes without saying, the better you know the languages that are being used in your source and destination code, the easier the transformation will go. Please note that a deep knowledge of the internals of the original code is not required when you begin. It's enough to understand what the original code does, a deeper understanding of how it does it will come in the process.
The essence of the method is that the original code is simplified through a series of simple and small refactorings. You shouldn't attempt to change a large chunk of code and try to optimize it all at once. You should progress in small steps, run tests after every change cycle and make sure to save every successful modification. That is, make a small change - test it. If all is well, save the change in the VCS repository.
Transfer process could be broken down into 3 big stages:
Only after completing these stages, you should look at the speed and the beauty of the code.
The first stage is the most complex. The goal is to refactor C/C++ code into "pure C++" code with syntax that is as close to C# syntax as possible. This stage means getting rid of:
Let us go over these steps in detail.
First of all, we should get rid of the unused code. For instance, in the case of libtiff, I removed the files that were not used to build Windows version of the library. Then, I found all the conditional compilation directives ignored by Visual Studio compiler in the remaining files and removed them, as well. Some examples are given below:
#if defined(__BORLANDC__) || defined(__MINGW32__)
# define XMD_H 1
#endif
#if 0
extern const int jpeg_zigzag_order[];
#endif
In many cases, the source code contains unused functions. They should be sent off to greener pastures, too.
Frequently, conditional compilation is used for creating specialized versions of the program. That is, some files contain #define as a compiler directive, while code in other files is enclosed in #ifdef and #endif. Example:
#define
#ifdef
#endif
/*jconfig.h for Microsoft Visual C++ on Windows 95 or NT. */
.....
#define BMP_SUPPORTED
#define GIF_SUPPORTED
.....
/* wrbmp.c */
....
#ifdef BMP_SUPPORTED
...
#endif /* BMP_SUPPORTED */
I would suggest selecting what to use straight away and get rid of conditional compilation. For example, should you decide that BMP format support is necessary, you should remove #ifdef BMP_SUPPORTED from the entire code base.
#ifdef BMP_SUPPORTED
If you do have to keep the possibility to create several versions of the program, you should make tests for every version. I suggest leaving around the most full version and work with it. After the transition is complete, you may add conditional compilation directives back in.
But we are not done working with preprocessor yet. It's necessary to find preprocessor commands that emulate functions and change them into real functions.
#define CACHE_STATE(tif, sp) do { \
BitAcc = sp->data; \
BitsAvail = sp->bit; \
EOLcnt = sp->EOLcnt; \
cp = (unsigned char*) tif->tif_rawcp; \
ep = cp + tif->tif_rawcc; \
} while (0)
To make a proper signature for a function, it is necessary to find out what are the types of all the arguments. Please note that BitAcc, BitsAvail, EOLcnt, cp and ep get assigned within the preprocessor command. These variables will become arguments of new functions and they should be passed by reference. That is, you should use uint32& for BitAcc in the function's signature.
BitAcc
BitsAvail
EOLcnt
cp
ep
uint32&
Programmers sometimes abuse preprocessor. Check out an example of such misuse:
; \
} \
}
In the code above, PEEK_BITS and DROP_BITS are also "functions", created similarly to HUFF_DECODE. In this case, the most reasonable approach is probably to include code of PEEK_BITS and DROP_BITS "functions" into HUFF_DECODE to ease transformation.
PEEK_BITS
DROP_BITS
HUFF_DECODE
You should go to the next stage of refining the code only when most harmless (as seen below) preprocessor directives are left.
#define DATATYPE_VOID 0
You can get rid of goto operators by introducing boolean variables and/or changing the code of a function. For example, if a function has a loop that uses goto to break out of it, then such construction could be changed to setting of a boolean variable, a break clause and a check of the variable's value after the loop.
goto
break
My next step is to scan the code for all the switch statements containing a case without a matching break.
switch
case
switch ( test1(buf) )
{
case -1:
if ( line != buf + (bufsize - 1) )
continue;
/* falls through */
default:
fputs(buf, out);
break;
}
This is allowed in C++, but not in C#. Such switch statements can be replaced with if blocks, or you can duplicate code if a fallthrough case takes up a couple of lines.
if
Everything that I described until now is not supposed to take up much time - not compared to what lies ahead. The first massive task that we're facing is combining of data and functions into classes. What we're aiming for is making every function a method of a class.
If the code was initially written in C++, it will probably contain few free (non-member) functions. In this case, a relationship between existing classes and free functions should be found. Usually, it turns out that free functions play an ancillary role for the classes. If a function is only used in one class, it can be moved into that class as a static method. If a function is used in several classes, then a new class can be created with this function as its static member.
static
If the code was created in C, there'll be no classes in it. They'll have to be created from the ground up, grouping functions around the data that they manipulate. Fortunately, this logical relationship is quite easy to figure out - especially if the C code is written using some OOP principles.
Let's examine the example below:
struct tiff
{
char* tif_name;
int tif_fd;
int tif_mode;
uint32 tif_flags;
......
};
...
extern int TIFFDefaultDirectory(tiff*);
extern void _TIFFSetDefaultCompressionState(tiff*);
extern int TIFFSetCompressionScheme(tiff*, int);
...
It's easy to see that the tiff struct begs to become a class and the three functions declared below - to be changed into public methods of this class. So, we're changing struct to class and the three functions to static methods of the class.
tiff
struct
public
class
As most functions become methods of different classes, it'll become easier to understand what to do with the remaining non-member functions. Don't forget that not all of the free functions will become public methods. There are usually a few ancillary functions not intended for use from the outside. These functions will become private methods.
private
After the free functions have been changed to static methods of classes, I suggest getting down to replacing calls to malloc/free functions with new/delete operators and adding constructors and destructors. Then static methods can be gradually turned into full-blown class methods. As more and more static methods are converted to non-static ones, it'll become clear that at least one of their arguments is redundant. This is the pointer to the original struct that has become the class. It may also turn out that some arguments of private methods can become member variables.
malloc/free
new/delete
Now that a set of classes replaced the set of functions and structs, it's time to get back to the preprocessor. That is, to defines like the one below (there should be no other ones remaining by now):
struct
#define STRIP_SIZE_DEFAULT 8192
Such defines should be turned into constants and you should find or create an owner class for them. The same as with functions, the newly-created constants may require creating a special new class for them (maybe, called Constants). As well as the functions, the constants may have to be public or private.
Constants
private
If the original code was written in C++, it may rely upon multiple inheritance. This is another thing to get rid of before converting code to C#. One way to deal with it is to change the class hierarchy in a way that excludes multiple inheritance. Another way is to make sure that all the base classes of a class that use multiple inheritance contain only pure virtual methods and contain no member variables. For example:
class A
{
public:
virtual bool DoSomething() = 0;
};
class B
{
public:
virtual bool DoAnother() = 0;
};
class C : public A, B
{ ... };
This kind of multiple inheritance can be easily transferred to C# by declaring A and B classes as interfaces.
Before going over to the next big-scale task (getting rid of pointer arithmetic), we should pay special attention to type synonyms declarations (typedef operator). Sometimes these are used as shorthand for proper types. For instance:
typedef
typedef vector<command* /> Commands;
I prefer to inline such declarations - that is, locate Commands in the code, change them to vector, and delete typedef.
Commands
vector
A more interesting case of using typedef is this:
typedef signed char int8;
typedef unsigned char uint8;
typedef short int16;
typedef unsigned short uint16;
typedef int int32;
typedef unsigned int uint32;
Mind the names of the types being created. It's obvious that typedef short int16 and typedef int int32 are somewhat of a hindrance, so it makes sense to change int16 to short and int32 to int in the code. Other typedefs, on the other hand, are quite useful. It's a good idea, however, to rename them so that they match type names in C#, like so:
typedef short int16
typedef int int32
int16
short
int32
int
typedefs
typedef signed char sbyte;
typedef unsigned char byte;
typedef unsigned short ushort
typedef unsigned int uint;
Special attention should be paid to the declarations similar to following one:
typedef unsigned char JBLOCK[64]; /* one block of coefficients */
This declaration defines a JBLOCK as array of 64 elements of the type unsigned char. I prefer to convert such declarations into classes. In other words, to create JBLOCK class that serves as a wrapper around array and implements methods to access the individual elements of the array. It facilitates better understanding of the way array of JBLOCKs (particularly 2- and 3-dimensional ones) are created, used and destroyed.
JBLOCK
unsigned char
JBLOCKs
Another large-scale task is getting rid of pointer arithmetic. Many C/C++ programs rely quite heavily on this feature of the language.
E.g.:
void horAcc32(int stride, uint* wp, int wc)
{
if (wc > stride) {
wc -= stride;
do {
wp[stride] += wp[0];
wp++;
wc -= stride;
} while ((int)wc > 0);
}
}
Such functions are to be rewritten, since pointer arithmetic is unavailable in C# by default. You may use such arithmetic in unsafe code, but such code has its disadvantages. That's why I prefer to rewrite such code using "index arithmetic". It goes like this:
void horAcc32(int stride, uint* wp, int wc)
{
int wpPos = 0;
if (wc > stride) {
wc -= stride;
do {
wp[wpPos + stride] += wp[wpPos];
wpPos++;
wc -= stride;
} while ((int)wc > 0);
}
}
The resulting function does the same job, but uses no pointer arithmetic and can be easily ported to C#. It could also be somewhat slower than the original, but again, this is not our priority for now.
Special attention should be paid to the functions that change pointers passed to them as arguments. Below is an example of such a function:
void horAcc32(int stride, uint* & wp, int wc)
In this case, changing wp in function horAcc32 changes the pointer in the calling function as well. Still, introducing an index would be a suitable approach here. You just need to define the index in the calling function and pass it to horAcc32 as an argument.
wp
horAcc32
void horAcc32(int stride, uint* wp, int& wpPos, int wc)
It is often convenient to convert int wpPos into a member variable.
int wpPos
After pointer arithmetic is out of the way, it is time to deal with function pointers (if there are any in code). Function pointers can be of three different types:
An example of the first type:
typedef int (*func)(int x, int y);
class Calculator
{
Calculator();
int (*func)(int x, int y);
static int sum(int x, int y) { return x + y; }
static int mul(int x, int y) { return x * y; }
public:
static Calculator* CreateSummator()
{
Calculator* c = new Calculator();
c->func = sum;
return c;
}
static Calculator* CreateMultiplicator()
{
Calculator* c = new Calculator();
c->func = mul;
return c;
}
int Calc(int x, int y) { return (*func)(x,y); }
};
In this case, functionality of the Calc method will vary depending on which one of CreateSummator and CreateMultiplicator methods was called to create an instance if the class. I prefer to create a private enum in the class that describes all possible choices for the functionality and the field that keeps a value from enum. Then, instead of a function pointer, I create a method that consists of a switch operator (or several ifs). The created method selects the necessary function based on the value of the field. The changed code:
Calc
CreateSummator
CreateMultiplicator
private enum
enum
ifs
class Calculator
{
enum FuncType
{ ftSum, ftMul };
FuncType type;
Calculator();
int func(int x, int y)
{
if (type == ftSum)
return sum(x,y);
return mul(x,y);
}
static int sum(int x, int y) { return x + y; }
static int mul(int x, int y) { return x * y; }
public:
static Calculator* createSummator()
{
Calculator* c = new Calculator();
c->type = ftSum;
return c;
}
static Calculator* createMultiplicator()
{
Calculator* c = new Calculator();
c->type = ftMul;
return c;
}
int Calc(int x, int y) { return func(x,y); }
};
You can choose another way: change nothing for the moment and use delegates at the time of transferring to C#.
An example for the second case (function pointers are created and used by different classes of the program):
typedef int (*TIFFVSetMethod)(TIFF*, ttag_t, va_list);
typedef int (*TIFFVGetMethod)(TIFF*, ttag_t, va_list);
typedef void (*TIFFPrintMethod)(TIFF*, FILE*, long);
class TIFFTagMethods
{
public:
TIFFVSetMethod vsetfield;
TIFFVGetMethod vgetfield;
TIFFPrintMethod printdir;
};
This situation is best resolved by turning vsetfield/vgetfield/printdir into virtual methods. Code that has used vsetfield/vgetfield/printdir will have to create a class derived from TIFFTagMethods with required implementation of the virtual methods.
vsetfield/vgetfield/printdir
TIFFTagMethods
An example of the third case (function pointers are created by users and passed into the program):
typedef int (*PROC)(int, int);
int DoUsingMyProc (int, int, PROC lpMyProc, ...);
Delegates are best suited here. That is, at this stage, while the original code is still being polished, nothing else should be done. At the later stage, when the project is transferred into C#, a delegate should be created instead of PROC, and the DoUsingMyProc function should be changed to accept an instance of the delegate as an argument.
PROC
DoUsingMyProc
The last change of the original code is the isolation of anything that may be a problem for the new compiler. It may be a code that actively uses standard C/C++ library (functions like fprintf, gets, atof and so on) or WinAPI. In C#, this will have to be changed to use .NET Framework methods or, if need be, p/invoke technique. Take a look at site in the latter case.
fprintf
gets
atof
"Problem code" should be localized as much as possible. To this end, you could create a wrapper class for the functions from C/C++ standard library or WinAPI. Only this wrapper will have to be changed later.
This is the moment of truth - the time to bring the changed code into the new project built using C# compiler. It's quite trivial, but labor-intensive. A new empty project is to be created, then the necessary classes should be added to that project and the code from the corresponding original classes copied into them.
You'll have to remove the ballast at this stage (like various #includes, for instance) and make some cosmetic modifications. "Standard" modifications include:
#includes
.h
.cpp
obj->method()
obj.method()
Class::StaticMethod
Class.StaticMethod
*
func(A* anInstance)
func(int& x)
func(ref int x)
Most of the modifications are not particularly complex, but some of the code will have to be commented out. Mostly the problem code that I discussed in part 2.9 will be commented out. The main goal here is to get C# code that compiles. It most probably won't work, but we'll come to that in due time.
After we made converted code compile, we need to adjust the code till the functionality matches the original. For that, we need to create a second set of tests that uses the converted code. The methods, commented out earlier, need to be carefully revised and rewritten using .NET Framework. I think this part needs no further explaining. I just want to expand on a few fine points.
When creating strings from byte arrays (and vice versa), a proper encoding should be selected carefully. Encoding.ASCII should be avoided due to its 7-bit nature. It means that bytes with values higher than 127 will become "?" instead of proper characters. It's best to use Encoding.Default or Encoding.GetEncoding("Latin1"). The actual choice of encoding depends on what happens next with the text or the bytes. If the text is to be displayed to the user - then Encoding.Default is a better choice, and if text is to be converted to bytes and saved into a binary file, then Encoding.GetEncoding("Latin1") suites better.
string
Encoding.ASCII
Encoding.Default
Encoding.GetEncoding("Latin1")
Output of formatted strings (code related to the family of printf functions in C/C++) may present certain problems. Functionality of the String.Format in .NET Framework is both poorer and different in syntax. This problem can be solved in two ways:
printf
String.Format
Check out "A printf implementation in C#" if you choose the first option.
I prefer the second option. If you will choose it too, then a search for "c# format specifiers" (without quotes) in Google and "Format Specifiers Appendix from C# in a Nutshell" may prove useful for you.
When all the tests that use the converted code will pass successfully, we can be sure that the conversion is completed. Now we can return to the fact that the code does not quite conform to the C# ideology (for example, the code full of get/set methods instead of properties) and deal with refactoring of the converted code. You may use profiler to identify bottlenecks in the code and optimize it. But that's quite a different story.
get
set
Happy port. | http://www.codeproject.com/Articles/130919/Adapting-old-code-to-new-realities?fid=1598159&df=90&mpp=10&noise=3&prof=True&sort=Position&view=None&spc=Relaxed&select=3679294&fr=1 | CC-MAIN-2014-23 | en | refinedweb |
Code. Collaborate. Organize.
No Limits. Try it Today.
If you usually typing a password in a desktop application, it may be that a keylogger spies out your secrets. This is obviously not good. Screen keyboards might be a good solution, but again, there could be a screen capture program which watches effortlessly your passwords. Furthermore, screen keyboards are relative unhandily.
The following article presents a relatively simple principle, which prevents the keylogger to write down passwords entered. Basically, the used system is surprisingly easy and can therefore be transferred to other programming / operating systems / platforms, although under a small limitation.
The first question is: How can a program hide keystrokes? Perhaps there are some difficult ways to do this, but most probable this is not possible. We create an assumption: Entered characters necessarily mean the keylogger sees them. And that's what we use against the keylogger.
The second question is: How can a program generate keystrokes? This is normally possible. For example, we use in C# the class SendKeys, which provides methods for sending keystrokes. By the way, there can be the mentioned limitation because a website has not the authorization to produce keystrokes.
SendKeys
The third question is: How can we combine these two statements? At every time when the user types a character, the program generates some keytrokes more. The keylogger write down all characters both from user and program, but only the user and the program know the entire password. Unauthorized third parties see only letter salad and they can not decrypt the main password.
The fourth question is: Is this main system 100 % secure? Surprisingly and unfortunately no. The prinziple has many weak points, but there are also many solutions to close these gaps. I advise every developer to think about it before they add this concept to their code. Let me explain you the vulnerabilities and the solutions:
Creating random keystrokes after every character allows attackers to reproduce the typed password. Therefore the program must create identical keystrokes. Then again the produced keystrokes should not be identical for all passwords. In summary, we need a algorithm, which produces for every character always the same keystrokes. In addition to this, the length of the generated keystrokes should vary.
For all that, an attacker can create a table with all characters and their hash result either by reverse engineering or by testing. Using this table, he can decrypt the password relatively easy. To prevent this, the generated keystrokes should depend on a password identity, such as account name, account number, e-mail or computer specification. Unfortunately, this is only an obstacle, but no blockage for attackers.
At first, the create a new component SecureTextBox with some properties:
SecureTextBox
public class SecureTextBox : TextBox
{
/// <summary>
/// Gets the typed password.
/// </summary>
public string Password
{
get;
private set;
}
/// <summary>
/// Sets or gets the password ID.
/// </summary>
public string ID
{
get;
set;
}
}
The next step is to implement a constructor for initializing important events:
public SecureTextBox()
{
this.TextChanged += new EventHandler(SecureTextBox_TextChanged);
this.KeyDown += new KeyEventHandler(SecureTextBox_KeyDown);
this.KeyUp += new KeyEventHandler(SecureTextBox_KeyUp);
}
The methods SecureTextBox_KeyDown and SecureTextBox_KeyUp should ensure that no key is pressed, otherwise characters are inserted incorrectly or even not. The boolean variable IsTriggering declares if the user entered a character while he is holding another key.
IsTriggering
private int KeysPressed = 0;
private bool IsTriggering = false;
void SecureTextBox_KeyDown(object sender, KeyEventArgs e)
{
KeysPressed++;
}
void SecureTextBox_KeyUp(object sender, KeyEventArgs e)
{
KeysPressed--;
if (KeysPressed == 0 & IsTriggering)
this.SecureTextBox_TextChanged(null, null);
}
Now consider the random functions. For this example, I used Random combined with a given seed. The seed is created from the ID and the last entered character from user.
Random
ID
Random _NextSaltLength, _NextSaltChar;
// TODO: Extend the CharContent with all important characters!
string CharContent = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";
private int NextSaltLength(bool CreateNew)
{
// TODO: Replace this code with your own function!
if (CreateNew)
_NextSaltLength = new Random(ID.GetHashCode() - Password[Password.Length - 1].GetHashCode());
return _NextSaltLength.Next(1, 4);
}
private string NextSaltChar(bool CreateNew)
{
// TODO: Replace this code with your own function!
if (CreateNew)
_NextSaltChar = new Random(ID.GetHashCode() + Password[Password.Length - 1].GetHashCode());
return CharContent[_NextSaltChar.Next(CharContent.Length)].ToString();
}
Finally, we can create the main method which manage the generation of the keystrokes:
private int RemainingSaltChars = 0;
private int LastTextLength = 0;
void SecureTextBox_TextChanged(object sender, EventArgs e)
{
if (KeysPressed > 0)
{
IsTriggering = true;
return;
}
IsTriggering = false;
if (LastTextLength < this.TextLength)
{
LastTextLength = this.TextLength;
if (RemainingSaltChars > 0)
{
if (RemainingSaltChars > 1)
SendKeys.Send(this.NextSaltChar(false));
RemainingSaltChars--;
}
else
{
this.Password += this.Text[this.TextLength - 1];
RemainingSaltChars = this.NextSaltLength(true);
SendKeys.Send(this.NextSaltChar(true));
}
}
else
{
this.ResetText();
LastTextLength = 0;
}
}
Here is a short description of the implementation: If no keys are pressed and the user entered a character in the TextBox, the char is saved, the quantity of the next generated characters are randomly calculated and the first character is send through SendKeys.Send(). Now the program is in a complicated loop and generates the specified keystrokes. This state is interrupted, if RemainingSaltChars is zero. Just as a footnote, if the user presses Delete or Backspace, the text will be reset because otherwise this would mix up the algorithm.
TextBox
SendKeys.Send()
RemainingSaltChars
public override void ResetText()
{
base.ResetText();
this.Password = String.Empty;
}
Originally I have this prinziple from a software, which polls for a password at startup. At first I wondered why more masked characters apperared in the textbox than typed until I figured it out. I implemented my thoughts and experimented with my code using a keylogger. I was very surprised about the effectivity. Then I wanted to analyse how good the other application works while using a keylogger. But the program produced no keystrokes. Unbelievable! I found this very amusing.
In the introduction I said that this prinziple could not be implemented by a website. A solution for this would be a extra Add-On for the browser, which could take on this task.
Published on 19 January. | http://www.codeproject.com/Articles/529676/How-to-make-keyloggers-life-difficult?fid=1824923&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None | CC-MAIN-2014-23 | en | refinedweb |
fabs, fabsf, fabsl - absolute value function
#include <math.h>
double fabs(double x);
float fabsf(float x);
long double fabsl absolute value of their argument x,| x|.
Upon successful completion, these functions shall return the absolute value of x.
[MX]
If x is NaN, a NaN shall be returned.If x is NaN, a NaN shall be returned.
If x is ±0, +0 shall be returned.
If x is ±Inf, +Inf shall be returned.
No errors are defined.
Computing the 1-Norm of a Floating-Point Vector
This example shows the use of fabs() to compute the 1-norm of a vector defined as follows:norm1(v) = |v[0]| + |v[1]| + ... + |v[n-1]|
where |x| denotes the absolute value of x, n denotes the vector's dimension v[i] denotes the i-th component of v (0<=i<n).#include <math.h>
double norm1(const double v[], const int n) { int i; double n1_v; /* 1-norm of v */
n1_v = 0; for (i=0; i<n; i++) { n1_v += fabs (v[i]); }
return n1_v; }
None.
None.
None.
isnan(), the Base Definitions volume of IEEE Std 1003.1-2001, <math.h>
First released in Issue 1. Derived from Issue 1 of the SVID.
The DESCRIPTION is updated to indicate how an application should check for an error. This text was previously published in the APPLICATION USAGE section.
The fabsf() and fabsl()/27 is applied, adding the example to the EXAMPLES section. | http://pubs.opengroup.org/onlinepubs/000095399/functions/fabsf.html | CC-MAIN-2014-23 | en | refinedweb |
I am doing MIDlet optimization, trying to fit all the things in 64Kb.
While doing it I constantly reingineer various methods.
Is there any way or tool to analyze class files, to show which methods consume lots of space?
I am doing MIDlet optimization, trying to fit all the things in 64Kb.
While doing it I constantly reingineer various methods.
Is there any way or tool to analyze class files, to show which methods consume lots of space?
I just use this:
javap -c ClassName | findstr "^Method return$"
It just gives a list of method names, and the offset to any return instructions - the last one of which will be the last instruction in the method, so gives you a fair indication of the method size. (It doesn't take account of exception tables, but it's a reasonably good guide to which methods are huge.)
graham
Could you give some live example?
I never used javap.
Javap itself gives me a list of methods without any offsets.
passing it to findstr doesnt give any ouput at all
javap extracts a variety of data from java class files. You can get an overview of the options with javap -help. To use with many MIDlet classes, you may have to add in a reference to the MIDP API classes. That would be something like:
javap -bootclasspath C:\apps\WTK104\wtklib\emptyapi.zip MyClass
The -c option provides disassembly of the methods. (I also often use the -l option to map method offsets (from exception reports) into line-numbers.)
For example, I have an implementation of StringTokenizer that I use in MIDlets, as it's missing from the MIDP API. If I use:
javap -c StringTokenizer
I get an output like:
Method boolean hasMoreElements()
0 aload_0
1 invokevirtual #17 <Method boolean hasMoreTokens()>
4 ireturn
Method java.lang.Object nextElement()
0 aload_0
1 invokevirtual #16 <Method java.lang.String nextToken()>
4 areturn
and so on. findstr is the NT equivalent of Unix's grep. I've given two patterns: ^Method - which pulls out any line starting with the text "Method"; and return$ - which pulls out any line ending with the text "return". (The ^ matches the start of a line, $ matches the end). findstr is case-sensitive. So:
javap -c StringTokenizer | findstr "^Method return$"
lists:
Method StringTokenizer(java.lang.String,java.lang.String,boolean)
79 return
Method StringTokenizer(java.lang.String,java.lang.String)
7 return
Method StringTokenizer(java.lang.String)
8 return
Method int findNextTokenStart(int)
87 ireturn
Method boolean hasMoreTokens()
16 ireturn
Method java.lang.String nextToken()
95 areturn
Method java.lang.String nextToken(java.lang.String)
9 areturn
Method int countTokens()
28 ireturn
Method boolean hasMoreElements()
4 ireturn
Method java.lang.Object nextElement()
4 areturn
Since the numbers listed are the offsets (in bytes) to the instruction, and the last instruction in each method is always a return (unless anyone knows otherwise!), this gives you some idea of the size of the method code. (There are sometimes multiple "return"s in a method, but the last one is still the last instruction).
It doesn't tell the whole story. In my tests, I find that a method with a single character name, no parameters and no code (the compiler will still generate a return instruction) increases the size of a class file by 33 bytes. And javap does not list constant-pool entries - it is difficult to include these in method size evaluations, as they are often shared between methods (where two methods include the same String constant, or invoke the same external method, for example).
Graham.
graham
Thanx a lot! Sounds like useful practice.
I also tried searching web for some tools, that would automate and simplify procedure.
Found this free Code Explorer :
Basically it shows length of class methods and other class information.
If you got used to javap, it might be not very attractive for you, but novises in disassembly should definetely consider this tool. | http://developer.nokia.com/community/discussion/showthread.php/26525-Methods-that-take-lots-of-space | CC-MAIN-2014-23 | en | refinedweb |
Dino Esposito
Download the Code Sample
In the past two installments of this column I discussed how to build an ASP.NET solution for the evergreen problem of monitoring the progress of a remote task from the client side of a Web application. Despite the success and adoption of AJAX, a comprehensive and widely accepted solution for displaying a context-sensitive progress bar within a Web application without resorting to Silverlight or Flash is still lacking.
To be honest, there aren't many ways in which one can accomplish this. You might craft your own solution if you want, but the underlying pattern won’t be that different from what I presented—specifically targeting ASP.NET MVC—in the past columns. This month, I’m back to the same topic, but I’ll discuss how to build a progress bar using a new and still-in-progress library: SignalR.
SignalR is a Microsoft .NET Framework library and jQuery plug-in being developed by the ASP.NET team, possibly to be included in future releases of the ASP.NET platform. It presents some extremely promising functionality that's currently missing in the .NET Framework and that more and more developers are demanding.
SignalR is an integrated client-and-server library that enables browser-based clients and ASP.NET-based server components to have a bidirectional and multistep conversation. In other words, the conversation isn’t limited to a single, stateless request/response data exchange; rather, it continues until explicitly closed. The conversation takes place over a persistent connection and lets the client send multiple messages to the server and the server reply—and, much more interesting—send asynchronous messages to the client.
It should come as no surprise that the canonical demo I’ll use to illustrate the main capabilities of SignalR is a chat application. The client starts the conversation by sending a message to the server; the server—an ASP.NET endpoint—replies and keeps listening for new requests.
SignalR is specifically for a Web scenario and requires jQuery 1.6 (or newer) on the client and ASP.NET on the server. You can install SignalR via NuGet or by downloading the bits directly from the GitHub repository at github.com/SignalR/SignalR. Figure 1 shows the NuGet page with all SignalR packages. At minimum, you need to download SignalR, which has dependencies on SignalR.Server for the server-side part of the framework, and SignalR.Js for the Web-client part of the framework. The other packages you see in Figure 1 serve more specific purposes such as providing a .NET client, a Ninject dependency resolver and an alternate transportation mechanism based on HTML5 Web sockets.
Figure 1 SignalR Packages Available on the NuGet Platform
Before I attempt to build a progress bar solution, it would be useful to get familiar with the library by taking a look at the chat example distributed with the downloadable source code (archive.msdn.microsoft.com/mag201203CuttingEdge) and other information referenced in the (few) related posts currently available on the Web. Note, though, that SignalR is not a released project.
In the context of an ASP.NET MVC project, you start by referencing a bunch of script files, as shown here:
<script src="@Url.Content("~/Scripts/jquery-1.6.4.min.js")"
type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.signalr.min.js")"
type="text/javascript"></script>
<script src="@Url.Content("~/signalr/hubs")"
type="text/javascript"></script>
Note that there’s nothing specific to ASP.NET MVC in SignalR, and the library can be used equally well with Web Forms applications.
An interesting point to emphasize is that the first two links reference a specific script file. The third link, instead, still references some JavaScript content, but that content is generated on the fly—and that depends on some other code you have within the host ASP.NET application. Also note that you need the JSON2 library if you intend to support versions of Internet Explorer prior to version 8.
Upon the page loading, you finalize the client setup and open the connection. Figure 2 shows the code you need. You might want to call this code from within the ready event of jQuery. The code binds script handlers to HTML elements—unobtrusive JavaScript—and prepares the SignalR conversation.
Figure 2 Setting Up the SignalR Library for a Chat Example
<script type="text/javascript">
$(document).ready(function () { // Add handler to Send button
$("#sendButton").click(function () {
chat.send($('#msg').val());
});
// Create a proxy for the server endpoint
var chat = $.connection.chat;
// Add a client-side callback to process any data
// received from the server
chat.addMessage = function (message) {
$('#messages').append('<li>' + message + '</li>');
};
// Start the conversation
$.connection.hub.start();
});
</script>
It's worth noting that the $.connection object is defined in the SignalR script file. The chat object, in contrast, is a dynamic object in the sense that its code is generated on the fly and is injected into the client page via the Hub script reference. The chat object is ultimately a JavaScript proxy for a server-side object. At this point it should be clear that the client code in Figure 2 means (and does) little without a strong server-side counterpart.
The ASP.NET project should include a reference to the SignalR assembly and its dependencies such as Microsoft.Web.Infrastructure. The server-side code includes a managed class that matches the JavaScript object you created. With reference to the code in Figure 2, you need to have a server-side object with the same interface as the client-side Chat object. This server class will inherit from the Hub class defined in the SignalR assembly. Here’s the class signature:
using System;
using SignalR.Hubs;
namespace SignalrProgressDemo.Progress
{
public class Chat : Hub
{
public void Send(String message)
{
Clients.addMessage(message);
}
}
}
Every public method in the class must match a JavaScript method on the client. Or, at least, any method invoked on the JavaScript object must have a matching method on the server class. So the Send method you see invoked in the script code of Figure 2 ends up placing a call into the Send method of the Chat object, as defined earlier. To send data back to the client, the server code uses the Clients property on the Hub class. The Clients member is of type dynamic, which enables it to reference dynamically determined objects. In particular, the Clients property contains a reference to a server-side object built after the interface of the client object: the Chat object. Because the Chat object in Figure 2 has an addMessage method, the same addMessage method is expected to be exposed also by the server-side Chat object.
Now let’s use SignalR to build a notification system that reports to the client any progress being made on the server during a possibly lengthy task. As a first step, let’s create a server-side class that encapsulates the task. The name you assign to this class, while arbitrarily chosen, will affect the client code you’ll write later. This simply means you have one more reason to choose the class name with care. Even more important, this class will inherit from a SignalR provided class named Hub. Here’s the signature:
public class BookingHub : Hub
{
...
}
The BookingHub class will have a few public methods—mostly void methods accepting any sequence of input parameters that makes sense for their intended purpose. Every public method on a Hub class represents a possible endpoint for the client to invoke. As an example, let’s add a method to book a flight:
public void BookFlight(String from, String to)
{
...
}
This method is expected to contain all the logic that executes the given action (that is, booking a flight). The code will also contain at various stages calls that in some way will report any progress back to the client. Let’s say the skeleton of method BookFlight looks like this:
public void BookFlight(String from, String to)
{
// Book first leg var ref1 = BookFlight(from, to); // Book return flight
var ref2 = BookFlight(to, from);
// Handle payment
PayFlight(ref1, ref2);
}
In conjunction with these main operations, you want to notify the user about the progress made. The Hub base class offers a property called Clients defined to be of type dynamic. In other words, you’ll invoke a method on this object to call back the client. The form and shape of this method, though, are determined by the client itself. Let’s move to the client, then.
As mentioned, in the client page you’ll have some script code that runs when the page loads. If you use jQuery, the $(document).ready event is a good place for running this code. First, you get a proxy to the server object:
var bookingHub = $.connection.bookingHub;
// Some config work
...
// Open the connection
$.connection.hub.start();
The name of the object you reference on the $.connection SignalR native component is just a dynamically created proxy that exposes the public interface of the BookingHub object to the client. The proxy is generated via the signalr/hubs link you have in the <script> section of the page. The naming convention used for names is camelCase, meaning that class BookingHub in C# becomes object bookingHub in JavaScript. On this object you find methods that match the public interface of the server object. Also, for methods, the naming convention uses the same names, but camelCased. You can add a click handler to an HTML button and start a server operation via AJAX, as shown here:
bookingHub.bookFlight("fco", "jfk");
You can now define client methods to handle any response. For example, you can define on the client proxy a displayMessage method that receives a message and displays it through an HTML span tag:
bookingHub.displayMessage = function (message) {
$("#msg").html(message);
};
Note that you’re responsible for the signature of the displayMessage method. You decide what’s being passed and what type you expect any input to be.
To close the circle, there’s just one final issue: who’s calling displayMessage and who’s ultimately responsible for passing data? It’s the server-side Hub code. You call displayMessage (and any other callback method you want to have in place) from within the Hub object via the Clients object. Figure 3 shows the final version of the Hub class.
Figure 3 The Final Version of the Hub Class
public void BookFlight(String from, String to)
{
// Book first leg
Clients.displayMessage( String.Format("Booking flight: {0}-{1} ...", from, to));
Thread.Sleep(2000);
// Book return
Clients.displayMessage( String.Format("Booking flight: {0}-{1} ...", to, from));
Thread.Sleep(3000);
// Book return
Clients.displayMessage( String.Format("Booking flight: {0}-{1} ...", to, from));
Thread.Sleep(2000);
// Some return value
Clients.displayMessage("Flight booked successfully.");
}
Note that in this case, the displayMessage name must match perfectly the case you used in the JavaScript code. If you mistype it to something such as DisplayMessage, you won’t get any exception—but no code will execute, either.
The Hub code is implemented as a Task object, so it gets its own thread to run and doesn’t affect the ASP.NET thread pool.
If a server task results in asynchronous work being scheduled, it will pick up a thread from the standard worker pool. The advantage is, SignalR request handlers are asynchronous, meaning that while they’re in the wait state, waiting for new messages, they aren’t using a thread at all. When a message is received and there’s work to be done, an ASP.NET worker thread is used.
In past columns, as well as in this one, I used the term progress bar frequently without ever showing a classic gauge bar as an example of the client UI. Having a gauge bar is only a nice visual effect and doesn’t require more complex code in the async infrastructure. However, Figure 4 shows the JavaScript code that builds a gauge bar on the fly given a percentage value. You can change the appearance of the HTML elements via proper CSS classes.
Figure 4 Creating an HTML-Based Gauge Bar
var GaugeBar = GaugeBar || {};
GaugeBar.generate = function (percentage) {
if (typeof (percentage) != "number")
return;
if (percentage > 100 || percentage < 0)
return;
var colspan = 1;
var markup = "<table class='gauge-bar-table'><tr>" +
"<td style='width:" + percentage.toString() +
"%' class='gauge-bar-completed'></td>";
if (percentage < 100) {
markup += "<td class='gauge-bar-tobedone' style='width:" +
(100 - percentage).toString() +
"%'></td>";
colspan++;
}
markup += "</tr><tr class='gauge-bar-statusline'><td colspan='" +
colspan.toString() +
"'>" +
percentage.toString() +
"% completed</td></tr></table>";
return markup;
}
You call this method from a button click handler:
bookingHub.updateGaugeBar = function (perc) {
$("#bar").html(GaugeBar.generate(perc));
};
The updateGaugeBar method is therefore invoked from another Hub method that just uses a different client callback to report progress. You can just replace displayMessage used previously with updateGaugeBar within a Hub method.
I presented SignalR primarily as an API that requires a Web front end. Although this is probably the most compelling scenario in which you might want to use it, SignalR is in no way limited to supporting just Web clients. You can download a client for .NET desktop applications, and another client will be released soon to support Windows Phone clients.
This column only scratched the surface of SignalR in the sense that it presented the simplest and most effective approach to program it. In a future column, I’ll investigate some of the magic it does under the hood and how packets are moved along the wire. Stay tuned.
Dino Esposito is the author of “Programming Microsoft ASP.NET MVC3” (Microsoft Press, 2011) and coauthor of “Microsoft .NET: Architecting Applications for the Enterprise” (Microsoft Press, 2008). Based in Italy, he’s a frequent speaker at industry events worldwide. You can follow him on Twitter at twitter.com/despos.
Thanks to the following technical expert for reviewing this article: Damian Edwards
I (similar to @toddysm) could not find the Chat app in the downloadable. I Googled it and I guess it is the one here: Dror
Never mind - found it :)
Where is the chat example that you say is distributed with the downloadable code? It doesn't seem to be packed with the C# code.
I have IE9 and win7, and it works smoothly in FireFox too, but strange ... It you have several opened browsers with the same url and if you start operation on one all other suddenly display feedback?????
Downloaded the code sample and tried to run but got the error message below: "SignalR: Connection must be started before data can be sent. Call .start() before .send()" Don't know what might be missing. Please help. I am using VS 2010 and running on IE8 and win 7.
More MSDN Magazine Blog entries >
Browse All MSDN Magazines
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | http://msdn.microsoft.com/en-us/magazine/hh852586.aspx | CC-MAIN-2014-23 | en | refinedweb |
Code. Collaborate. Organize.
No Limits. Try it Today.
This article presents a class, handling sending of input (mouse and keyboard) to any running Windows Control by its handle.
Some time ago, I was making a program which would win a penalty shootout soccer flash game for me. My task in the application was repeatedly to do these things:
I found some great articles concerning taking of screenshot like this one.
This library helped me with image processing a lot.
On the other hand, I spent a lot of time trying to find out how to send mouse input to flash window correctly... and when I got how to do it, I decided to make this library.
In order to use the TakeOver class, you need to obtain the handle (int number) of the window you want to control. It's possible to use spy++ tool from Visual Studio and convert HEX value to DEC using Windows calculator. The other possibility is to implement your own Window Picker (like in the demo project). Further details about this problem are out of scope for this article.
TakeOver
int number
The first thing you need to do to take control over the window is to create a TakeOver class instance.
Remo.TakeOver tO = new Remo.TakeOver(targetWindowHandle);
Sending of input messages to a window is quite self-explanatory. The class provides these methods for sending messages:
public void SendLeftButtonDown(int x,int y);
public void SendLeftButtonUp(int x,int y);
public void SendLeftButtonDblClick(int x,int y);
public void SendRightButtonDown(int x,int y);
public void SendRightButtonUp(int x,int y);
public void SendRightButtonDblClick(int x,int y);
public void SendMouseMove(int x,int y);
public void SendKeyDown(int key);
public void SendKeyUp(int key);
public void SendChar(char c);
public void SendString(string s);
Sending input usually requires a target window to be focused to work properly, however it's not always so. The demo application can be tested here. Normally, you will have the application running in the background with controlled window focused, so there should be no problem with sending input messages. Since keyboard messages are not very reliable (I do not know how to encode lParam of SendMessage correctly, any hints?), it is recommended for keyboard input to use the SendKeys class once the window is focused via the SetFocus() method. Sendkeys is part of the standard System.Windows.Forms namespace. To focus target window TakeOver class provides a method:
lParam
SendMessage
SendKeys
SetFocus()
Sendkeys
System.Windows.Forms
public void SetFocus();
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
SendKeys.SendWait(^l);
SetFocus();
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/14656/Sending-Input-Messages-to-Other-Windows-or-How-To?msg=2015417 | CC-MAIN-2014-23 | en | refinedweb |
Code. Collaborate. Organize.
No Limits. Try it Today.
The jQuery DataTables plug-in is an excellent client-side component that can be used to create rich-functional tables in the web browser. This plug-in adds lot of functionalities to the plain HTML tables that are placed in web pages such as filtering, paging, sorting, changing page length, etc.
This article shows how the jQuery DataTables plug-in can be integrated into an ASP.NET MVC application. It contains step by step examples that show how the DataTables plug-in interacts with server-side components.
This article do not cover all possible integration scenarios of JQuery DataTables plugin in ASP.NET MVC application. For other integation scenarios, you might also take a look at the other articles in this serie:
Also, if you want to see all possible configurations of the JQuery DataTables plugin you might take a look at the Enhancing HTML tables using the jQuery DataTables plug-in article where are described many usefull configuration options.
Currently there are several components that can help developers to create effective and functional-rich tables on the web pages. Some of them are server-side components such as standard ASP.NET GridView or other similar components that generate HTML code and attach events which postback a request to the server and where user actions are handled by the sever code and a new table is generated. The other group of components, such as jQuery DataTables, FlexGrid, or jqGrid are implemented as client-side code. These plugins take the plain HTML tables as the one shown on the following figure and add various enhancements.
As an example, if you apply JQuery DataTables plugin to this plain HTML table, you will get something like the table on the following figure: pagination that enables the user to navigate through the pages and text that automatically displays which records are currently displayed. All these functionalities are added by default and all you need is a single line of code:
$('#myDataTable').dataTable();
Under assumption that the plain table shown on the first figure has an id "myDataTable", this code will enhance the table with DataTables plugin. Most of these functionalities can be used completely separate from the server-side code, i.e., the web server can generate a plain HTML table in standard format in any server-side technology such as ASP.NET Web Forms, ASP.NET MVC, PHP, Java etc. The client-side JavaScript components will use whatever gets generated and add client-side functionalities. In this client-side mode, DataTables takes all the table rows from the <tbody></tbody> section and performs filtering, paging, and sorting directly on these elements as on in-memory objects. This is the fastest way to use DataTables but it requires that the server returns all the data in a single call, loads all these rows as in-memory JavaScript objects, and render them dynamically in DOM. This might cause performance issues with server calls and memory usage on the client. However, this minimizes the number of requests sent to the server because once the table is loaded, the server is not used at all.
<tbody></tbody>
If you are interested in using the JQuery DataTables plugin in pure client-side mode, then you do not need to read this article. All you need to do is to generate a plain HTML table and apply plugin. You can use various configuration options in the plugin so if you are interested in this mode you might read the following article "Enhancing HTML tables using the jQuery DataTables plug-in" where I have explained various configuration options of DataTables plugin.
Theme of this article is using the JQuery DataTables plugin in the server-side processing mode.
It is possible to implement client-server interaction by configuring DataTables to query the server via AJAX calls in order to fetch the required data. In this case, table that is generated on the client side is initially empty as the one shown in the following example:
<table id="myDataTable" class="display">
<thead>
<tr>
<th>ID</th>
<th>Company name</th>
<th>Address</th>
<th>Town</th>
</tr>
</thead>
<tbody>
</tbody>
</table>
As you might notice, this "table" do not have any rows in it. In order to apply the plugin to this "table" you will need to call something like a following code:
$('#myDataTable').dataTable({
"bServerSide": true,
"sAjaxSource": "server_processing.php"
});
In this code is used the server-side processing mode by setting the bServerSide parameter to true. In this mode, DataTables plugin will load table data from the remote URL using the Ajax request. The second parameter defines to what URL DataTables plugin should send Ajax request in order to load the data into the table.
bServerSide
Once the plug-in is applied on such of table, it will call the server side page (server_processing.php in the example above), post information about the required data, take the response from the server, and load the table data dinamically. The server response is formatted as a JSON object, parsed on the client side, and displayed in the table body. The following figure shows a trace of the calls sent to the server (captured using the Firebug add-in for Firefox).
In this case, each event (changing the number of items that should be displayed per page, entering a keyword in the search filter, sorting, pressing the pagination button, etc.) triggers the DataTables plug-in to send information about the current page, search filter, and sort column to the server page. As shown in the third server_processing.php page and sends information about the user action. A dull example of the server-side configuration of the jQuery DataTables plug-in can be found here. A major problem with the server-side mode is the implementation of the server-side logic that accepts parameters from the client-side component, performs action, and returns the data as expected. This article explains how to configure jQuery DataTables and implement server-side logic with ASP.NET MVC controllers.
The first thing you need to do is to create a standard ASP.NET Model-View-Controller structure. There are three steps required for this setup:
A simple application that keeps information about companies and displays them in a table will be used as an example. This simple table will be enhanced with the jQuery DataTables plug-in and configured to take all the necessary data from the server-side. The following JavaScript components need to be downloaded:
These files should be stored in the local file system and included in the HTML page that is rendered on the client. An example of usage of these files is explained below.
The Model comes to a simple class containing company data. The fields that we need are company ID, name, address, and town. The source code of the company model class is shown below:
public class Company
{
public int ID { get; set; }
public string Name { get; set; }
public string Address { Integration</title>
<link href="~/Content/dataTables/demo_table.css"
rel="stylesheet" type="text/css" />
<script src="~/Scripts/jQuery-1.4.4.min.js"
type="text/javascript"></script>
<script src="~/Scripts/jQuery.dataTables.min.js"
type="text/javascript"></script>
<script src="~/Scripts/index.js"
type="text/javascript"></script>
</head>
<body>
<div id="container">
<div id="demo">
<h2>Index</h2>
<table id="myDataTable" class="display">
<thead>
<tr>
<th>ID</th>
<th>Company name</th>
<th>Address</th>
<th>Town</th>
</tr>
</thead>
<tbody>
</tbody>
</table>
</div>
</div>
</body>
</html>
The view engine used is Razor but any other view engine can be used instead, as the engine specific code is only setting the layout page on the top of the page. The page includes all the necessary JavaScript libraries and renders an empty table. Data that should be displayed is not bound on the server-side. Therefore, the table body is not needed as data is going to be pulled from the server. In client side mode, the <tbody></tbody> section would contain rows that should be displayed on the page. However, in server-side mode, data is dynamically taken via AJAX calls. Since all processing and display is done on the client-side in the browser, the usage of the server-side template engine is irrelevant. However, in a real situation, if we should bind some dynamic data on the server-side, we could use any MVC template engine such as ASPX, Razor, Spark, or NHaml. The View includes the standard jQuery and DataTables libraries required to initialize a table, as well as the standard DataTables CSS file which can be replaced with any custom style-sheet. Code that initializes the DataTables plugin should be placed in the included index.js file as shown below:
$(document).ready(function () {
$('#myDataTable').dataTable({
"bServerSide": true,
"sAjaxSource": "Home/AjaxHandler",
"bProcessing": true,
"aoColumns": [
{ "sName": "ID",
"bSearchable": false,
"bSortable": false,
"fnRender": function (oObj) {
return '<a href=\"Details/' +
oObj.aData[0] + '\">View</a>';
}
},
{ "sName": "COMPANY_NAME" },
{ "sName": "ADDRESS" },
{ "sName": "TOWN" }
]
});
});
The initialization code is placed in the standard jQuery document ready wrapper. It finds the table with the myDataTable ID and the magic begins. By setting the bServerSide parameter to true, DataTables is initialized to work with the server-side page. Another parameter, sAjaxSource, should point to an arbitrary URL of the page that will provide data to client-side table ("Home/AjaxHandler" in this example). The parameter bProcessing tells DataTables to show the "Processing..." message while the data is being fetched from the server, while aoColumns defines the properties of the table columns (e.g., whether they can be used for sorting or filtering, whether some custom function should be applied on each cell when it is rendered etc. - more information about DataTables properties can be found on the DataTables site) and it is not directly related to the client-server setup of DataTables.
myDataTable
bServerSide
true
sAjaxSource
bProcessing
aoColumns
Since there is no server-side processing, the controller class is also fairly simple and it practically does nothing. The controller class used in the example is shown below:
public class HomeController : Controller
{
public ActionResult Index()
{
return View();
}
}
As shown in the snippet, the controller just waits for someone to call the "Home/Index" URL and forwards the request to the Index view. All data processing is done in the Home/AjaxHandler controller action.
Once the table has been initialized, it is necessary to implement server-side logic that will provide data to DataTables. The server-side service will be called (by jQuery DataTables) each time data should be displayed. Since the DataTables configuration declared "Home/AjaxHandler" as URL that should be used for providing data to the DataTable, we need to implement an AjaxHandler action in the Home controller that will react to the Home/AjaxHandler calls. For example:
public class HomeController : Controller
{
public ActionResult AjaxHandler(jQueryDataTableParamModel param)
{
return Json(new{
sEcho = param.sEcho,
iTotalRecords = 97,
iTotalDisplayRecords = 3,
aaData = new List<string[]>() {
new string[] {"1", "Microsoft", "Redmond", "USA"},
new string[] {"2", "Google", "Mountain View", "USA"},
new string[] {"3", "Gowi", "Pancevo", "Serbia"}
}
},
JsonRequestBehavior.AllowGet);
}
}
The Action method returns a dummy 3x4 array that simulates information expected by the DataTable plug-in, i.e., the JSON data containing the number of total records, the number of records that should be displayed, and a two dimensional matrix representing the table cells. For example:
{ "sEcho":"1",
"iTotalRecords":97,
"iTotalDisplayRecords":3,
"aaData":[ ["1","Microsoft","Redmond","USA"],
["2","Google","Mountain View","USA"],
["3","Gowi","Pancevo","Serbia"]
]
}
Values that the server returns to the DataTables plug-in are:
sEcho
iTotalRecords
iTotalDisplayedRecords
aaData
Once DataTables is initialized, it calls the Home/AjaxHandler URL with various parameters. These parameters can be placed in the method signature so MVC can map them directly, or accessed via the Request object as in standard ASP.NET, but in this example, they are encapsulated in the JQueryDataTableParamModel class given below.
Request
JQueryDataTableParamModel
/// <summary>
/// Class that encapsulates most common parameters sent by DataTables plugin
/// </summary>
public class jQueryDataTableParamModel
{
/// <summary>
/// Request sequence number sent by DataTable,
/// same value must be returned in response
/// </summary>
public string sEcho{ get; set; }
/// <summary>
/// Text used for filtering
/// </summary>
public string sSearch{ get; set; }
/// <summary>
/// Number of records that should be shown in table
/// </summary>
public int iDisplayLength{ get; set; }
/// <summary>
/// First record that should be shown(used for paging)
/// </summary>
public int iDisplayStart{ get; set; }
/// <summary>
/// Number of columns in table
/// </summary>
public int iColumns{ get; set; }
/// <summary>
/// Number of columns that are used in sorting
/// </summary>
public int iSortingCols{ get; set; }
/// <summary>
/// Comma separated list of column names
/// </summary>
public string sColumns{ get; set; }
}
The DataTables plug-in may send some additional parameters, but for most purposes, the mentioned parameters should be enough.
The first example of server-side processing implementation shown in this article is a response to the initial call. Immediately after initialization, DataTables sends the first call to the sAjaxSource URL and shows the JSON data returned by that page. The implementation of the method that returns the data needed for initial table population is shown below:
public ActionResult AjaxHandler(jQueryDataTableParamModel param)
{
var allCompanies = DataRepository.GetCompanies();
var result = from c in allCompanies
select new[] { c.Name, c.Address, c.Town };
return Json(new { sEcho = param.sEcho,
iTotalRecords = allCompanies.Count(),
iTotalDisplayRecords = allCompanies.Count(),
aaData = result
},
JsonRequestBehavior.AllowGet);
}
The list of all companies is fetched from the repository; they are formatted as a two-dimensional matrix containing the cells that should be shown in the table, and sent as a JSON object. The parameters iTotalRecords and iTotalDisplayRecords are equal to the number of companies in the list as this is the number of records that should be shown and the number of total records in a data set. The only parameter used from the request object is sEcho, and it is just returned back to DataTables. Although this server action is good enough to display initial data, it does not handle other data table operations such as filtering, ordering, and paging.
iTotalDisplayRecords
DataTables plugin adds a text box in the table, so the user can filter the results displayed in the table by entering a keyword. Text box used for filtering is shown on the following figure:
In server-side processing mode, each time the user enters some text in the text box, DataTables sends a new AJAX request to the server-side expecting only those entries that match the filter. DataTables plugin sends the value entered in the filter text box in the sSearch HTTP request parameter. In order to handle the user request for filtering, AjaxHandler must be slightly modified, as it is shown in the following listing:
sSearch
AjaxHandler
public ActionResult AjaxHandler(jQueryDataTableParamModel param)
{
var allCompanies = DataRepository.GetCompanies();
IEnumerable<Company> filteredCompanies;
if (!string.IsNullOrEmpty(param.sSearch))
{
filteredCompanies = DataRepository.GetCompanies()
.Where(c => c.Name.Contains(param.sSearch)
||
c.Address.Contains(param.sSearch)
||
c.Town.Contains(param.sSearch));
}
else
{
filteredCompanies = allCompanies;
});
In the given example, we use a LINQ query to filter the list of companies by the param.sSearch value. DataTables plugin sends the keyword entered in the text box in the sSearch parameter. The filtered companies are returned as JSON results. The number of all records and the records that should be displayed are returned as well.
param.sSearch
DataTables can use multiple column based filters instead of a single filter that is applied on the whole table. Detailed instructions for setting a multi-column filter can be found on the DataTables site (multi-filtering example). When multi-column filtering is used, in the table footer are added separate text boxes for filtering each individual columns, as it is shown on the following figure:
In multi-column filtering configuration, DataTables sends individual column filters to the server side in request parameters sSearch_0, sSearch_1, etc. The number of request variables is equal to the iColumns variable. Also, instead of the param.sSearch value, you may use particular values for columns as shown in the example:
sSearch_0
sSearch_1
iColumns
//Used if particular columns are separately filtered
var nameFilter = Convert.ToString(Request["sSearch_1"]);
var addressFilter = Convert.ToString(Request["sSearch_2"]);
var townFilter = Convert.ToString(Request["sSearch_3"]);
DataTables initialization settings could specify whether a column is searchable or not (the ID column is not searchable in the previous example). DataTables also sends additional parameters to the server-side page so server side component can determine which fields are searchable at all. In the configuration used in this article, DataTables sends the individual column filters to server as request parameters (bSearchable_0, bSearchable_1, etc). The number of request variables is equal to the iColumns variable.
bSearchable_0
bSearchable_1
//Optionally check whether the columns are searchable at all
var isIDSearchable = Convert.ToBoolean(Request["bSearchable_0"]);
var isNameSearchable = Convert.ToBoolean(Request["bSearchable_1"]);
var isAddressSearchable = Convert.ToBoolean(Request["bSearchable_2"]);
var isTownSearchable = Convert.ToBoolean(Request["bSearchable_3"]);
The example configuration used in this article has the isIDSearchable variable set to false, while other variables are set to true. Values that are sent to the server depend on the aoColumns setting in the database initialization function. The problem with server-side filtering might be a big number of AJAX requests sent to the server. The DataTables plug-in sends a new AJAX request to the server each time the user changes a search keyword (e.g., types or deletes any character). This might be a problem since the server needs to process more requests although only some of them will really be used. Therefore, it would be good to implement some delay function where the request will be sent only after some delay (there is an example of the fnSetFilteringDelay function on the DataTables site).
isIDSearchable
false
fnSetFilteringDelay
Pagination
Another functionality that is added by the DataTables plug-in is the ability to perform paging on the displayed records. DataTables can add either Previous-Next buttons or standard paging numbers. also it enables you to change the number of the records that will be displayed per page using the drop-down. Drop-down and pagination links are shown on the following figure:
In server-side mode, each time the user clicks on a paging link, the DataTables plug-in sends information about the current page and the page size to a server-side URL that should process the request. The AjaxHandler method that processes paging should be modified to use information sent in the request as shown in the example:
public ActionResult AjaxHandler(jQueryDataTableParamModel param)
{
var allCompanies = DataRepository.GetCompanies();
IEnumerable<Company> filteredCompanies = allCompanies;
var displayedCompanies = filteredCompanies
.Skip(param.iDisplayStart)
.Take(param.iDisplayLength););
}
This example is similar to the previous one, but here we use the param.iDisplayStart and param.IDisplayLength parameters. These are integer values representing the starting index of the record that should be shown and the number of results that should be returned.
param.iDisplayStart
param.IDisplayLength
The last functionality that will be explained in this article is sorting results by column. The DataTables plug-in adds event handlers in HTML columns so the user that can order results by any column. DataTables supports multi-column sorting too, enabling user to order results by several columns, pressing the SHIFT key while clicking on the columns. DataTables adds event handlers to the column heading cells with direction arrows as it is shown on the following figure:
Each time user clicks on the column, DataTables plugin sends information about the column and sort order direction (ascending or descending). To implement sorting, AjaxHandler should be modified to use information about the column that should be used for ordering, as shown in the example:
public ActionResult AjaxHandler(jQueryDataTableParamModel param)
{
var allCompanies = DataRepository.GetCompanies();
IEnumerable<Company> filteredCompanies = allCompanies;
var sortColumnIndex = Convert.ToInt32(Request["iSortCol_0"]);
Func<Company, string> orderingFunction = (c => sortColumnIndex == 1 ? c.Name :
sortColumnIndex == 2 ? c.Address :
c.Town);
var sortDirection = Request["sSortDir_0"]; // asc or desc
if (sortDirection == "asc")
filteredCompanies = filteredCompanies.OrderBy(orderingFunction);
else
filteredCompanies = filteredCompanies.OrderByDescending(orderingFunction););
There is an assumption that the server-side knows which fields are sortable. However, if this information is not known or it can be dynamically configured, DataTables sends all the necessary information in each request. Columns that are sortable are sent in an array of request parameters called bSortable_0, bSortable_1, bSortable_2, etc. The number of parameters is equal to the number of columns that can be used for sorting which is also sent in the iSortingCols parameter. In this case, the name, address, and town might be sortable on the client side, so the following code determines whether they are actually sortable or not:
bSortable_0
bSortable_1
bSortable_2
iSortingCols
var isNameSortable = Convert.ToBoolean(Request["bSortable_1"]);
var isAddressSortable = Convert.ToBoolean(Request["bSortable_2"]);
var isTownSortable = Convert.ToBoolean(Request["bSortable_3"]);
These variables can be added in the conditions of the ordering function, creating a configurable sort functionality.
DataTables plugin enables multicolumn sorting by defualt. If you hold SHIFT key and click on several heading column cells, table will be ordered by first column then by second etc. On the following figure is shown how rows in the table are sorted by the first three columns at the same time.
This is directly implemented in the client-side mode; however, in the server-side processing mode you will need to implement logic that will order records by several columns it in the controlller.
When several columns are selected for sorting, for each column that should be sorted DataTables sends in the Ajax request pairs iSortCol_0, sSortDir_0, iSortCol_1, sSortDir_1, iSortCol_2, sSortDir_2, where each pair contains position of the column that should be ordered and sort direction. in the previous code samle i have used only iSortCol_0 and sSortDir_0, because I have assumed that only single column sorting is used.
Multi-column sorting code is similar to the code shown in the previous example but you will need to put several ordering functions for each column, and apply OrderBy().ThenBy().ThenBy() chain of functions. Due to the specific nature of this code and complexity I have not implemented it here. Note that if this is a requirement more easier solution would be to use dinamically generated SQL Query where you will concatenate these columns and sorting directions in the "ORDER BY" clause. Linq is great and clean code for presentation adn maintenence; howerer, in some situations where you need too much customization, you need to go to lower level functionalities.
This article represents a step-by-step guideline for integrating the jQuery DataTables plug-in into server-side code. It shows how the standard DataTables plug-in that, by default, works with client-side data can be configured to take all the necessary data from the server via AJAX calls. The server-side code used in this example is a pure LINQ query set performed on an in-memory collection of objects. However, in a real application, we should use some data access components such as Linq2Sql, Entity Framework, Stored Procedures, WCF services, or any other code that takes data and performs sorting, paging, and filtering. As these data access samples are out of the scope of this article, they are not used in the example.
A complete example with controller action where are merged all functionalities described in the article can be downloaded from the link above. This is a project created in Visual Web Developer 2010, and you will need to have installed ASP.NET MVC with Razor. If you are not using Razor, it is not a big problem - I recommend that you copy some files into your project (Controller, Model, JavaScript's) and modify them if needed.
This article is just a first part in the serie about using the JQuery DataTables plugin in ASP.NET applications. Other parts in this serie. | http://www.codeproject.com/Articles/155422/jQuery-DataTables-and-ASP-NET-MVC-Integration-Part?fid=1609174&df=90&mpp=25&sort=Position&spc=Relaxed&select=4238691&tid=4262809 | CC-MAIN-2014-23 | en | refinedweb |
Code. Collaborate. Organize.
No Limits. Try it Today.
This article describes a method to utilize WCF self hosted services to be able to serve web 2.0 user interface as well as web services.
By using this method tools or applications can be built using web 2.0 technologies like HTML5, CSS3 with self hosted WCF service as backend business layer. Thus developers can benefit from advanced JavaScript libraries like JQuery or Underscore, etc.. to build user interface. This eliminates need to install .NET Framework on client machines.
Traditionally, self hosted WCF services has user interface built using WinForms and WPF technologies. And to be able to use browser as UI platform ASP.NET and IIS become hard dependencies. For tools and applications which typically are executed out of a single machine or intranet with limited users, IIS is considered as an overkill. Thus browser as a UI platform with HTML5, CSS3 and backed by powerful JavaScript engines is a very promising option for self hosted WCF services.
Initial versions of WCF could only work with SOAP messages. With bindings like WebHttpBinding from .NET 3.5 onwards WCF could offer direct support to consume web services from Ajax calls from JavaScript. However consuming dataformats like JSON is still not so much so an out of the box experience. Effort to write DataContracts for every argument and return types is quite high. And with other data formats like XML, client end couldn't directly benefit from JavaScript libraries due to the extra step involved to convert from XML to JSON.
WebHttpBinding
DataContract
This method considers 'Stream' as input and out type for UI interfacing operations. It also supports basic authentication which can be extended for advanced usage. And no customization is done at any layer and just works based on out of box features of WCF in .NET 3.5. This can also be used with .NET 4.0. However JSON.Net is used to format json objects.
Stream
out
Download and build the source. You can use Visual Studio 2010 or VS command line. (Code is built for .NET 4.0, but can be used with .NET 3.5 also.)
All these operations are run out of a self hosted WCF service. And this demonstrated features like authentication, fetching static files, getting as well as setting and data. Following sections walk through code.
Main class just starts a WCF service using WebHttpBinding and WebServiceHost.
WebServiceHost
class Program
{
static void Main(string[] args)
{
string baseAddress = "http://" + Environment.MachineName + ":2011/";
using (WebServiceHost host =
new WebServiceHost(typeof(WebApp), new Uri(baseAddress)))
{
WebHttpBinding binding = new WebHttpBinding();
host.AddServiceEndpoint(typeof
(IWCFWebApp01), binding, "").Behaviors.Add(new WebHttpBehavior());
host.Open();
... other lines left for brevity
}
}
}
Service contract defines a method 'Files' to serve all static HTML files, another method 'Links' serves all linked files like JavaScript, stylesheets and data. Other resources like login. logout, States and State are service operations. Observable point here is 'Stream' data type for input as well as output.
Files
static
Links
[ServiceContract]
public interface IWCFWebApp01
{
[OperationContract, WebGet(UriTemplate = "/{resource}.{extension}")]
Stream Files(string resource, string extension);
[OperationContract, WebGet(UriTemplate = "/{path}/{resource}.{extension}")]
Stream Links(string path, string resource, string extension);
[OperationContract, WebInvoke(Method = "POST", UriTemplate = "/login")]
Stream Login(Stream request);
[OperationContract, WebInvoke(Method = "POST", UriTemplate = "/logout")]
Stream Logout(Stream request);
[OperationContract, WebGet(UriTemplate = "/states")]
Stream States();
[OperationContract, WebInvoke(Method = "POST", UriTemplate = "/state")]
Stream State(Stream request);
}
Now coming to service implementation. As this method is primarily intended for self hosted WCF services, singleton instance with concurrent threads is good enough. Consider sessions as applicable. But unlike IIS hosted services, self hosted services would normally serve limited users and thus default concurrency is good enough. And on functional line constructor is just loading data onto local member.
[ServiceBehavior(InstanceContextMode =
InstanceContextMode.Single,ConcurrencyMode=ConcurrencyMode.Multiple)]
public class WebApp : IWCFWebApp01
{
JObject states;
public WebApp()
{
if (states==null)
states = JObject.Parse(File.ReadAllText("web\\data\\states.json"));
}
... other lines left for brevity
}
Now that the server is running, when user tries to access for the first time, several HTM, CSS and JavaScript files are served. These are handled by methods 'Files' and 'Links'. Links are files referred in index.htm in head section like JQuery. And in 'Files' method, different types of files are picked up from separate folders based on extension. Switch cases can be extended based types of files.
Switch
public Stream Links(string path, string resource, string extension)
{
... other lines left for brevity
}
public Stream Files(string resource, string extension)
{
switch (extension)
{
case "htm":
... other lines left for brevity
case "js":
... other lines left for brevity
}
}
When user makes a login request, basic authentication token is sent in standard header "Authorization". That is validated in a separate method 'Authenticate' described later. Also username is sent as JSON object in request stream which is parsed into JSON object using JSON.Net library. Logout method is similar to login.
Authorization
Authenticate
public Stream Login(Stream request)
{
if (!Authenticate()) return null;
... other lines left for brevity
JObject o = JObject.Parse(data);
}
When user clicks on 'States' request reaches the following method. As this resource doesn't have any extension, request will not go through 'Files' method. Here request is authenticated and data is sent from member variable.
States
public Stream States()
{
if (!Authenticate()) return null;
WebOperationContext.Current.OutgoingResponse.ContentType = "application/json";
return new MemoryStream(Encoding.ASCII.GetBytes(states.ToString()),false);
}
When user does a modification and clicks on 'Update', the following method would be invoked. This parses state id and update class member variable and returns updated list back to client.
Update
public Stream State(Stream request)
{
... other lines left for brevity
JObject data = JObject.Parse(new string(buffer));
int id = ((int)data["id"]) -1;
states["states"][id]["visited"] = true;
return States();
}
Authentication methods which require authorization invoke the following method:
public bool Authenticate()
{
string userName = "user";
string password = "pass";
string basicAuthCode = Convert.ToBase64String
(Encoding.ASCII.GetBytes (string. Format ("{0}: {1}", userName, password)));
string token = WebOperationContext.Current.IncomingRequest.Headers["Authorization"];
if (token.Contains(basicAuthCode))
{
return true;
}
else
{
WebOperationContext.Current.OutgoingResponse.StatusCode =
HttpStatusCode.Unauthorized;
return false;
}
}
Client code is placed in separate folder by name 'web'. At the root of this folder, all static HTM files are placed. And separate sub-folders are included for images, JavaScript and stylesheets. These are referred from 'Files' method in server code based on extension.
Client follows Single Page Application design. Thus only 'index.htm' is a full HTML page. Other HTML files are filled into 'content' division using Ajax calls as shown below for states:
content
function StatesPage () {
this.loadStatesPage = function (data) {
content = undefined;
$.each (data, function (index, value) {
if (data[index]["id"]=="content") {
content = data[index].innerHTML;
$("#content")[0].innerHTML = content;
$("#b_update")[0].onclick = updateStates;
loadStatesTable();
}
});
if (content == undefined) {alert("Failed to load page: Content missing"); return;}
}
... other lines left for brevity
}
Authentication: Client side authentication token is in login class. This token is added in header section in 'beforeSend' function of each call after login. Other code in client requires understanding about Jquery, JavaScript and Ajax concepts which are well explained on the web.
beforeSend
If windows authentication is required, service host can be customized.
More structured JavaScript libraries with MVC architecture can also be used without making any change to server side code.
Consider using JQuery UI plugins for common look and feel.
As UI is browser based, extending UI for handheld devices becomes quite easy.
This article, along with any associated source code and files, is licensed under The MIT License
public bool Authenticate()
{
return true;
window.onload = function () {
auth = new LoginPage();
//auth.loadLoginPage();
$('#a_states')[0].onclick = loadStates;
$('#a_about')[0].onclick = loadAbout;
loadStates();
}
public interface IWCFWebApp01
{
[OperationContract, WebGet(UriTemplate = "/")]
Stream IndexFile();
public Stream IndexFile()
{
WebOperationContext.Current.OutgoingResponse.ContentType = "text/html";
string fileName = "web\\states.htm";
return new FileStream(fileName, FileMode.Open);
}
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/229461/Devloping-web-user-interface-for-self-hosted-WC?msg=4332838 | CC-MAIN-2014-23 | en | refinedweb |
Opened 3 years ago
Closed 9 months ago
#16919 closed New feature (fixed)
Pass user to set_password_form in GET requests
Description
SetPasswordForm is being passed None on GET requests even though there is always a user available at that point. This patch passes user, so you can use it in the form constructor for whatever - e.g. populate initial with values that depend on the user involved.
Attachments (2)
Change History (9)
Changed 3 years ago by Jaime Irurzun <jaime.irurzun@…>
comment:1 Changed 3 years ago by aaugustin
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 3 years ago by jaimeirurzun
I was also concerned about the security implications that this patch might have when I wrote it, but given this only applies to the case in which the token has already been validated, I can't think of any security hole.
Basically I have a custom SetPasswordForm in which I give the user the opportunity to update a few fields from his profile that will be used in the password reset logic, so I want to fill the initial values with his current data, for which I need the user object.
comment:3 Changed 3 years ago by aaugustin
- Needs tests set
- Triage Stage changed from Unreviewed to Accepted
comment:4 Changed 3 years ago by ejucovy
- Cc ethan.jucovy@… added
I have another use case for this: rendering the user's name in the registration/password_reset_confirm.html template.
Currently the password_reset_confirm view does not provide "user" as a template context variable, nor even "uidb36" and "token". Since the form also doesn't have the user object stored on a GET request, this means that there's no way for the template to say "{% if validlink %} Hello, {{ user.username }} -- reset your password here {% endif %}" -- short of forking the view, or some pretty hacky middleware that re-parses the request URL and re-fetches the user from the given uid+token.
If this patch were accepted, that would be possible, like so: "{% if validlink %} Hello, {{ form.user.username }} -- reset your password here {% endif %}"
I see that the "needs_tests" flag is set on this ticket .. what sort of test would be required for this patch to be merged?
Changed 2 years ago by ejucovy
comment:5 Changed 2 years ago by ejucovy
- Needs tests unset
I've attached a new version of the patch including auth.views tests that double as demonstration of a use case for this behavior.
comment:6 Changed 10 months ago by anonymous
Another use case for this:
I can add "security question/answer" that user picks when registering and extend SetPasswordForm with CharField labeled with question user picked.
comment:7 Changed 9 months ago by Tim Graham <timograham@…>
- Resolution set to fixed
- Status changed from new to closed
Technically, the patch works.
However, I can't figure out a practical use case for prepopulating a password field that doesn't have security issues. I'd like to make sure this change doesn't encourage bad practices.
Could you explain what you're trying to achieve? | https://code.djangoproject.com/ticket/16919 | CC-MAIN-2014-23 | en | refinedweb |
Startup functionality is common to all of the samples and is shared in the project SampleBase. Open the main file PhysXSample.cpp and navigate to the function onInit, which the sample calls on startup to initialize PhysX.
Note that the file includes the entire PhysX API in a single header. You may also selectively include just the headers you need, but PxPhysicsAPI.h includes everything to help you get started faster:
#include "PxPhysicsAPI.h"
First, create a PxFoundation object:
static PxDefaultErrorCallback gDefaultErrorCallback; static PxDefaultAllocator gDefaultAllocatorCallback; mFoundation = PxCreateFoundation(PX_PHYSICS_VERSION, gDefaultAllocatorCallback, gDefaultErrorCallback); if(!mFoundation) fatalError("PxCreateFoundation failed!");
Every PhysX module requires a PxFoundation instance to be available. The required parameters are a version ID, an allocator callback and an error callback. PX_PHYSICS_VERSION is a macro predefined in our headers to enable PhysX to check for a version mismatch between the headers and the corresponding SDK DLLs. Usually, the implementations of the allocator callback and error callback are specific to the application, but PhysX provides default implementations that make it easy to get started:
More information about the allocator and error callbacks can be found in the section The PhysX API. The actual sample code supports an advanced memory allocator that tracks allocations instead of the default, but we have omitted that detail here.
An optional profile zone manager enables the performance profiling capabilities of the PhysX Visual Debugger:
mProfileZoneManager = &PxProfileZoneManager::createProfileZoneManager(mFoundation); if(!mProfileZoneManager) fatalError("PxProfileZoneManager::createProfileZoneManager failed!");
Now create the top-level PxPhysics object:
bool recordMemoryAllocations = true; mPhysics = PxCreatePhysics(PX_PHYSICS_VERSION, *mFoundation, PxTolerancesScale(), recordMemoryAllocations, mProfileZoneManager ); if(!mPhysics) fatalError("PxCreatePhysics failed!");
Again, the version ID has to be passed in. The PxTolerancesScale parameter makes it easier to author content at different scales and still have PhysX work as expected, but to get started simply pass a default object of this type. The recordMemoryAllocations parameter specifies whether to perform memory profiling.
The PhysX cooking library provides utilities for creating, converting, and serializing bulk data. Depending on your application, you may wish to link to the cooking library in order to process such data at runtime. Alternatively you may be able to process all such data in advance and just load it into memory as required. Initialize the cooking library as follows:
mCooking = PxCreateCooking(PX_PHYSICS_VERSION, *mFoundation, PxCookingParams()); if (!mCooking) fatalError("PxCreateCooking failed!");
The PxCookingParams struct configures the cooking library to target different platforms, use non-default tolerances or produce optional outputs.
The cooking library generates data though a streaming interface. In the samples, implementations of streams are provided in the PxToolkit library to read and write from files and memory buffers.
The extensions library contains many functions that may be useful to a large class of users, but which some users may prefer to omit from their application either for code size reasons or to avoid use of certain subsystems, such as those pertaining to networking. Initializing the extensions library requires the PxPhysics object:
if (!PxInitExtensions(*mPhysics)) fatalError("PxInitExtensions failed!");
When linking PhysX as a static library on memory constrained platforms, it is possible to avoid linking the code of some PhysX features that are not always used in order to save memory. Currently the optional features are:
- Articulations
- Height Fields
- Unified Height Fields
- Cloth
- Particles
If your application requires a subset of this functionality, it is recommended that you call PxCreateBasePhysics as opposed to PxCreatePhysics and then manually register the compoments you require. Below is an example that registers all of the options with the exception of the unified height fields:
physx::PxPhysics* customCreatePhysics(physx::PxU32 version, physx::PxFoundation& foundation, const physx::PxTolerancesScale& scale, bool trackOutstandingAllocations, physx::PxProfileZoneManager* profileZoneManager) { physx::PxPhysics* physics = PxCreateBasePhysics(version, foundation, scale, trackOutstandingAllocations, profileZoneManager); if(!physics) return NULL; PxRegisterArticulations(*physics); PxRegisterHeightFields(*physics); return physics; }
From PhysX 3.3, we introduce support unified height field collision detection. This approach shares the collision detection code between meshes and height fields such that height fields behave identically to the equivalent terrain created as a mesh. This approach facilitates mixing the use of height fields and meshes in the application with no tangible difference in collision behavior between the two approaches. To enable this approach, you must dynamically register it as demonstrated:
PxRegisterUnifiedHeightFields(*physics);
This dynamic registration should be performed instead of the call to:
PxRegisterHeightFields(*physics);
Note that dynamic registration will only save memory when linking PhysX as a static library, as we rely on the linker to strip out the unused code.
The PhysXCommon dll is marked as delay loaded inside of the PhysX and PhysXCooking project. So it is possible to have a delay loaded PhysXCommon, PhysX and PhysXCooking dlls. If you need to load a diffent dll, it is possible to create a PxDelayLoadHook and define the name of PhysXCommon dll that should be loaded by PhysX dll and PhysXCooking dll,please see example:
class SampleDelayLoadHook: public PxDelayLoadHook { virtual const char* GetPhysXCommonDEBUGDllName() const { return "PhysX3CommonDEBUG_x64_Test.dll"; } virtual const char* GetPhysXCommonCHECKEDDllName() const { return "PhysX3CommonCHECKED_x64_Test.dll"; } virtual const char* GetPhysXCommonPROFILEDllName() const { return "PhysX3CommonPROFILE_x64_Test.dll"; } virtual const char* GetPhysXCommonDllName() const { return "PhysX3Common_x64_Test.dll"; } } gDelayLoadHook;
Now the hook must be set to PhysX and PhysXCooking:
PxDelayLoadHook::SetPhysXInstance(&gDelayLoadHook); PxDelayLoadHook::SetPhysXCookingInstance(&gDelayLoadHook);
To dispose of any PhysX object, call its release() method. This will destroy the object, and all contained objects. The precise behavior depends on the object type being released, so refer to the reference guide for details. To shut down physics entirely, simply call release() on the PxPhysics object, and this will clean up all of the physics objects:
mPhysics->release();
Do not forget to release the foundation object as well, but only after all other PhysX modules have been released:
mFoundation->release(); | https://developer.nvidia.com/sites/default/files/akamai/physx/Docs/Startup.html | CC-MAIN-2014-23 | en | refinedweb |
IRootDesigner Interface
Provides support for root-level designer view technologies.
Assembly: System (in System.dll)
The IRootDesigner type exposes the following members.
A root designer is the designer that is in the top position, or root, of the current design-time document object hierarchy. A root designer must implement the IRootDesigner interface. A root designer typically manages the background view in designer view mode, and usually displays the controls within the base container of the current design time project.
The following code example demonstrates a IRootDesigner implementation associated with a sample user control. This IRootDesigner implementation displays a control for the background view in designer view by overriding the GetView method. You need to add a reference to the System.Design assembly to compile the example.
To use this example, add the source code to a project and show the RootViewSampleComponent in designer view to display the custom root designer view.
using System; using System.Collections; using System.ComponentModel; using System.ComponentModel.Design; using System.Diagnostics; using System.Drawing; using System.Windows.Forms; using System.Windows.Forms.Design; namespace SampleRootDesigner { // This sample demonstrates how to provide the root designer view, or // design mode background view, by overriding IRootDesigner.GetView(). // This sample component inherits from RootDesignedComponent which // uses the SampleRootDesigner. public class RootViewSampleComponent : RootDesignedComponent { public RootViewSampleComponent() { } } // The following attribute associates the SampleRootDesigner designer // with the SampleComponent component. [Designer(typeof(SampleRootDesigner), typeof(IRootDesigner))] public class RootDesignedComponent : Component { public RootDesignedComponent() { } } public class SampleRootDesigner : ComponentDesigner, IRootDesigner { // Member field of custom type RootDesignerView, a control that // will be shown in the Forms designer view. This member is // cached to reduce processing needed to recreate the // view control on each call to GetView(). private RootDesignerView m_view; // This method returns an instance of the view for this root // designer. The "view" is the user interface that is presented // in a document window for the user to manipulate. object IRootDesigner.GetView(ViewTechnology technology) { if (technology != ViewTechnology.Default) { throw new ArgumentException("Not a supported view technology", "technology"); } if (m_view == null) { // Some type of displayable Form or control is required // for a root designer that overrides GetView(). In this // example, a Control of type RootDesignerView is used. // Any class that inherits from Control will work. m_view = new RootDesignerView(this); } return m_view; } // IRootDesigner.SupportedTechnologies is a required override for an // IRootDesigner. Default is the view technology used by this designer. ViewTechnology[] IRootDesigner.SupportedTechnologies { get { return new ViewTechnology[] {ViewTechnology.Default}; } } // RootDesignerView is a simple control that will be displayed // in the designer window. private class RootDesignerView : Control { private SampleRootDesigner m_designer; public RootDesignerView(SampleRootDesigner designer) { m_designer = designer; BackColor = Color.Blue; Font = new Font(Font.FontFamily.Name, 24.0f); } protected override void OnPaint(PaintEventArgs pe) { base.OnPaint(pe); // Draws the name of the component in large letters. pe.Graphics.DrawString(m_designer.Component.Site.Name, Font, Brushes.Yellow, ClientRectangle); } } } }. | http://msdn.microsoft.com/en-us/library/system.componentmodel.design.irootdesigner(v=vs.100).aspx | CC-MAIN-2014-23 | en | refinedweb |
Patent application title: COMPUTER-BASED METHOD FOR TEAMING RESEARCH ANALYSTS TO GENERATE IMPROVED SECURITIES INVESTMENT RECOMMENDATIONS
Inventors:
James Tanner (Boulder, CO, US)
Assignees:
WALL STREET ON DEMAND
IPC8 Class: AG06Q4000FI
USPC Class:
705 36 R
Class name: Automated electrical financial or business practice or management arrangement finance (e.g., banking, investment or credit) portfolio selection, planning or analysis
Publication date: 2009-01-01
Patent application number: 20090006268
Abstract:.
Claims:
1. A computer-based method for processing and combining investment
recommendations from research providers such as stock analysts,
comprising:providing a server running a research team management module
on a digital communications network;providing identifiers for a set of
research providers to a client node linked to the communications
network;with the research team management module, generating a research
team comprising two or more of the research providers based on selections
received from the client node;assigning team rules with the research team
management module to the research team defining an algorithm for
processing recommendations of research providers on the research
team;accessing recommendations of the research providers on the research
team for a security;generating a team recommendation for the security by
processing the accessed recommendations using the algorithm defined by
the team rules; andreporting the team recommendation to the client node.
2. The method of claim 1, wherein the algorithm comprises combining the accessed recommendations after applying weights to the accessed recommendations that are defined in the team rules for both positive and negative recommendations for each of the research providers on the research team.
3. The method of claim 2, wherein the weights are user-selected based on input received from the client node and wherein the weights for the positive recommendations differ from the weights for the positive recommendations of at least one of the research providers.
4. The method of claim 1, wherein the algorithm comprises determining whether more than half of the research providers agree on a positive or a negative recommendation and if so, choosing the agreed upon positive or negative recommendation as the team recommendation.
5. The method of claim 1, wherein the algorithm comprises determining from the accessed recommendations whether all of the research providers on the research team have provided a positive or a negative recommendation for the security and if so, providing the positive or negative recommendation as the team recommendation.
6. The method of claim 1, wherein the algorithm comprises determining from the accessed recommendations whether all of the research providers on the research team have provided a positive recommendation for the security and if so, providing the positive recommendation as the team recommendation.
7. The method of claim 6, wherein the algorithm further comprises determining if any of the accessed recommendations is a negative recommendation, and if so, providing the negative recommendation as the team recommendation.
8. The method of claim 1, further comprising running a performance analytics module to determine historic performance of the set of research providers for recommending securities and delivering at least a portion of the determined historic performance to the client node prior to the generating of the research team.
9. The method of claim 8, further comprising running the performance analytics module to determine historic performance of the research team including accessing prior recommendations of the research providers on the research team over a time period, applying the team rules to the prior recommendations to generate historic team recommendations, and processing security pricing information with the historic team recommendations and the method further comprising reporting the historic performance of the research team to the client node with a comparison to the historic performance of the set of research providers.
10. The method of claim 8, wherein the historic performance of the set of research providers is determined based on a performance analysis methodology user-selected from a set of methodologies and wherein the selections for the two or more research providers for the research team comprise at least one request for a highest performer based on one of the methodologies.
11. The method of claim 1, further comprising repeating the accessing, the generating, and the reporting for a set of securities and yet further comprising monitoring for modifications of the recommendations of the research providers and when detected generating an alert to the client node with a new recommendation created by repeating the team recommendation generating.
12. A method for forming a team of individual research providers to generate stock recommendations, comprising:running a user interface on a client node linked to a network;displaying performance information based on analysis of prior stock investment recommendations for a plurality of research providers in the user interface;receiving a selection of a set of the research providers for a research team;generating team rules for combining stock investment recommendations from the set of research providers on the research team into team investment recommendations;running an analytic module on a server to determine investment performance for the research team based for a period of time based on the team investment recommendations for the period of time for a set of stocks; andreporting the investment performance to the client node along with individual performance information for the period of time for the set of research providers on the research team.
13. The method of claim 12, further operating the analytic module to determine the performance information for the plurality of research providers based on a user-selected performance analysis methodology.
14. The method of claim 12, wherein the team investment recommendations for the set of stocks comprise positive, neutral, or negative recommendations and the team rules comprise aggregation rules for combining the positive, neutral, or negative recommendations of the set of research providers on the research team.
15. The method of claim 14, wherein the team rules generating comprises receiving from the client node weights to apply to each positive recommendation and each negative recommendation of each of the research providers on the research team and wherein the aggregation rules comprise combining the recommendations after applying the weights.
16. The method of claim 12, wherein team rules are selected from the group of methods for combining team member recommendations consisting of an averaging methodology, a majority methodology, a consensus methodology, and a unanimous-to-buy-one-to-sell methodology.
17. A system for providing a virtual security analyst providing a single investment recommendation for each security in a set of securities based on recommendations of a set of research providers, comprising:means for enabling a user to specify two or more of the research providers to include on a research team;means for enabling a user to specify a set of rules for combining positive and negative recommendations for securities generated by the research providers on the research team;means for determining for a set of securities historic performance of individual ones of the research providers on the research team and of the research team based on the set of rules and prior positive and negative recommendations of the research providers; andmeans for reporting the historic performances to a user.
18. The system of claim 17, wherein the set of rules includes separate weight values assigned to negative recommendations and to positive recommendations for each of the research providers on the research team.
19. The system of claim 17, means for enabling a user to define a plurality of securities for coverage by the research team, means for determining positive and negative recommendations of the research providers on the research team for the plurality of securities, means for generating team recommendations by processing the determined positive and negative recommendations using the set of rules, and means for reporting the team recommendations.
20. The system of claim 19, means for generating updated team recommendations in response to modifications of one or more of the determined positive and negative recommendations and means for alerting a user to the generated updated team recommendations.
21. The system of claim 17, wherein the historic performances reported to the user exclude the prior positive and negative recommendations of the research providers, whereby the research team is validated without release of recommendation information of the research providers on the research team.
Description:
BACKGROUND OF THE INVENTION
[0001]1. Field of the Invention
[0002]The present invention relates, in general, to financial data analysis methods and systems, and, more particularly, to computer software, hardware, and computer-based methods for analyzing research data, including buy, sell, hold, and other recommendations for stocks, generated by security or stock analysts or computer generated to provide consumers of such research data techniques for aggregating the data to improve investing performance.
[0003]2. Relevant Background
[0004]There are hundreds of firms who have as their business to provide buy, hold, and sell recommendations on individual securities--"Opinionated Research". There are also many firms that help the potential customers of such research recommendation determine which providers are the best--"Performance Measurement Firms".
[0005]Securities or stock analysts or "research analysts" are one of the main resources for information on companies and the desirability of investing in the companies. Research analysts attempt to predict future events such as earnings well in advance of the time the earnings are announced and may use these predictions and other information such as long-term prospects to provide investment recommendations, sector rating, growth rate and price targets. The role of the security analyst is generally well-known and includes issuing earnings estimates for securities, other financial estimates concerning future economic events, recommendations on whether investors should buy, sell, or hold financial instruments, such as equity securities, and other predictions. Security analyst estimates provided in research reports may include, but are not limited to, quarterly and annual earnings estimates for companies whether or not they are traded on a public securities exchange.
[0006]While research reports provide large amounts of useful information, there are numerous challenges facing a consumer of the estimates and recommendations, such as a manager of a mutual fund or an individual investor. Analysts typically summarize their search reports with a brief recommendation on the action an investor should take regarding a particular investment or stock. The various research analysts, who may be individual analysts or firms, often will differ in their recommendation for a particular company and its stock. For example, one research analyst may provide a buy recommendation while another firm is providing a sell recommendation. Further, every firm may use its own rating system to provide its recommendations with one firm using a five-point scale of buy, outperform, neutral, underperform, or avoid while another uses a three-point scale of buy, hold, or sell. Yet another firm may use a similar number of recommendations but use differing labels for their recommendations such as a five-point scale of recommended list, trading buy, market outperformer, market perform, and market underperformer. It may be difficult to understand the meaning of these various recommendations and to compare recommendations from different research analysts. As a result, products have been developed to normalize or standardize the various recommendation scales to allow the recommendations to be compared and, in some cases combined, for review by consumers.
[0007]The quality of an analyst's recommendations may also vary significantly. Several services have been developed to determine the past performance of research analysts and to provide rankings of their performance relative to their peers. For example, ranking services exist that provide rankings of analysts based on their ability to predict earnings for companies. Other services provide rankings of analysts by analyzing their research reports to determine whether their recommendations such as buy, hold, and sell have been accurate within a particular stock sector. Most analysts have strengths and weaknesses such as being better suited at picking stocks to sell, at predicting earnings but not predicting larger economic trends, analyzing stock values for certain sized companies, analyzing technology or durable goods, or the like, and these strengths and weaknesses cause the analysts to provide more accurate data in particular investment environments and less accurate data in others. Currently, the "performance measurement" companies are focused on picking the "best" research providers for their needs. They do not give the research buyer a way to explore the possibility of research provider combinations. Currently the "research aggregators" have taken in different research providers' data. The aggregators generally analyze the analyst performance and/or the research provider's performance. Aggregators use analyst's estimates accuracy and the performance of their ratings history accuracy to identify the top performing analysts and research providers.
[0008]The research aggregators are focused on the best analyst at estimates or ratings accuracy for a stock, sector or geography or the research provider and their performance. This is an isolated way of looking at research and is not necessarily the best way to research securities, nor does this satisfy the needs of the head of research or the research analyst. The research analyst purchases a "mosaic" of research or inputs to their investment process and it would be valuable to look at the combinations of data in order to identify top performing "research teams". No aggregator looks at the performance of combinations of research providers or creates virtual or synthetic research teams, using a combination of research providers to form a team based on a series of rules that the analyst sets.
[0009]There are nearly two hundred research firms that provide research on stocks within the United States alone, and at any one time, nearly one hundred of these analysts may be following a particular company's stock. As a result, it is very difficult to select among the numerous analysts to determine whose recommendations to follow at any particular time and for any particular stock, sector or market. In an attempt to address this problem, a number of services collect recommendations from a large portion of the analyst firms. Some services combine the recommendations of the analysts such as in a chart that displays the average recommendation of all the recommendations for a particular stock. This is often called the "consensus" recommendation, but it is actually a relatively naive average that places an equal weight on all analysts regardless of their past performance or industry rankings. Also, the average recommendation of all analysts is often not a unanimous consensus because a buy or positive recommendation often will include a number of sell or negative recommendations (and vice versa for a sell recommendation). Some performance measurement firms, like Starmine, create a more sophisticated average estimate and recommendation by giving contributing analysts with a better track record, more weight than contributing analysts with a worse track record Even so, these existing tools are focused on allowing the research consumer to find the best research analysts for a particular stock, or to create a stock-by-stock consensus, but they do not help the research consumer find combinations of providers that would outperform the individual providers.
[0010]With the above issues in mind, it may be useful to further explain the use of much of the securities research data by those in the financial industry. Asset and money managers such as traditional equity managers (e.g., long-only investors), pension funds, hedge funds, banks, and individual investors are generally considered "buy-side" consumers of research reports produced by research analysts They purchase investment research in order to make informed investment decisions including buy, sell, and hold decisions on new and existing investments in stocks of companies. Investment research includes qualitative and quantitative data from independent research analysts or provides and from affiliated research analysts (e.g., "sell-side" analysts with relationships with the firm or company they are analyzing). As noted above, investment research firms often have specialties such as a particular geographic coverage, market capitalization, market sector, or the like.
SUMMARY OF THE INVENTION
[0011]To address the above and other problems, the present invention provides methods and systems for creating combinations of research providers, or "teams". The invention allows the research consumer to explore different combination of providers and analyze how that combination performed relative to the providers themselves or other teams. The system and method involves electing a team of research providers or analysts from a set of such providers and then testing or validating the selected team using historical market and financial data to determine their performance when their recommendations are aggregated according to user-selected weighting and recommendation aggregation rules. The system and method then utilize the research team as a virtual analyst to provide investment recommendations for a user-selected set of securities in an ongoing manner.
[0012]There are many benefits to the investment community behind the research team approach. This analysis can be done without the research consumer seeing the actual recommendations of the research providers, which means the research and the proprietary data of the research provider is protected. This also means that the consumer of research can analyze the research provider's performance and their team performance before purchasing the underlying research from the provider. There is no other system in the market that has a team-based approach to ratings history and performance. Our system is further innovative in that you don't need to purchase the content/research to view the rating history and performance. There is no system that looks at the performance of combination of research providers or creates a virtual or synthetic research provider and tracks its historical performance and treats the virtual or synthetic research provider as a single entity.
[0013]Other customer benefits of the research team approach include a demonstrable alpha generation when using a research team approach to research selection and research purchase. The customer has documented proof of the capability of their research methodology, information sources. This is significant for the customer in helping to satisfy the regulatory requirements of both the FSA and the SEC in justifying their spending on investment research. The research team system helps provide the quantitative basis behind a given research spend.
[0014]Further, the customer can track the performance of the team as easily as tracking the changes of one provider. Changes to estimates, target price, and ratings are tracked on a team basis, rather than simply looking at individual analyst or provider or stock. By tracking the team, rather than simply individual providers, the analyst monitors one virtual team or synthetic team, rather than a handful of individual providers. This simplifies the amount of information the analyst has to digest to inform their investment opinion.
[0015]The concept of utilizing a team of research providers rather than a single provider comes from the inventor's realization that teams often perform better than individuals in making decisions similar to stock recommendations and also because individuals often have weaknesses and strengths that can compliment each other when the team members are selected correctly. For example, one team member may be accurate on buy recommendations while another team member may be accurate on sell recommendations, and weighting and team aggregation rules (e.g., typically not a simple averaging although average weighting may be used in some cases) are used to properly combine the members' recommendations to generate an aggregated or combined recommendation that is more accurate over time and in differing investment environments than either individual In the methods and system of the invention, a team member's recommendations related to their strengths are generally weighted more heavily than their weaknesses such as weighing their positive or negative recommendations more heavily.
[0016]More particularly, a computer-based method is provided for processing and combining investment recommendations of individual research providers (e.g., stock analysts, quantitative models that generate recommendations, and the like) to achieve improved investment performance. The method includes providing a server or computer device that runs a research team management module and that is communicatively linked to a network such as the Internet. A list of individual research providers or identifiers of such providers is provided or displayed on a client node that is linked to the network. The research team management module then may generate a research team that includes two or more of the research providers, and the team members typically are chosen by a user of the client node by entering selections in a user interface such as web page or screen. The method further includes assigning team rules to the research team to define an algorithm or method of processing recommendations from the research providers or team members on the research team. Then recommendations for one or more securities are accessed or retrieved for the research providers on the team and a team recommendation is generated by applying the team rules to the retrieved recommendations. The team recommendation is reported to the client node to assist a user in making investment decisions.
[0017]There are several variables and inputs to creating a team including selecting research team members and requiring the provider to have an opinion in order to be included in the team rating. Another variable or input may include the designation of the rule used to calculate the recommendation and recommendation history; this may include but is not limited to average, majority, consensus, unanimous to buy and one to sell, unanimous to sell and one to buy and unanimous to buy and one to sell but not short. Additional conditions or rules applied to the team include the number of team members who must provide a rating and weightings on attributes such as over weighting a team member's positive or negative ratings. As a function of the rule and weights a user selects, they will impact and change the research team history and performance.
[0018]The algorithm for processing the individual recommendations may include first applying weights to each of the recommendations and then combining or "averaging" the weighted recommendations, with the weights being user-selected to differentiate the strengths of each member of the research team (e.g., by applying differing weights on positive and negative recommendations for an individual provider or differing weights on the various team members). The team rules may also include other aggregation methods such as determining if more than half of the team members have recommended a buy/positive or a sell/negative recommendation and if so, using this majority recommendation as the team recommendation. In some cases, the team rules will call for all to agree to generate a positive or a buy recommendation and allow one team member to cause the team to generate a negative or sell recommendation (e.g., unanimous to buy and one to sell). The method also calls for running a performance analytics module on the server to determine historic performance for recommending securities of the set of research providers and delivering at least a portion of this to the client node for use in selecting team members. The selection of one or more of the team members may be automated or partially automatic as a user can request high-end performers in a particular performance category (e.g., as determined by a particular performance analysis methodology). The method may farther include determining the historic performance of the formed research team by accessing actual prior recommendations of the team members over a particular time period for a select or default set of stocks or securities. This historical team performance can then be reported to the client node along with historic performance data for the individual team members, and a user can then determine if the team members perform better together or apart and adjust the team rules/members as appropriate (e.g., an iterative process may be used to enhance the team results). In addition to such team validation or testing, the research team may be used to track a set of securities going forward and alerts may be generated when one or more of the recommendations of the team members is changed causing the team recommendation for a stock or security to also change.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019]FIG. 1 is a functional block diagram of a computer system or network according to an embodiment of the invention showing use of a virtual securities analyst system, e.g., a server or other computing device to implement software modules or programs and stored digital data to perform the research data analysis functions of the invention;
[0020]FIG. 2 is a flow diagram illustrating an embodiment of research team selection and operation according to an embodiment of the invention such as may be achieved during operation of the system of FIG. 1;
[0021]FIG. 3 is a user interface or screen shot of a browser page generated as part of implementing an embodiment of the invention, e.g., operation of GUI generation module and performance analytics module of FIG. 1, illustrating a user's or a consumer's ability to select among a number of performance analysis methodologies to rate independent research providers relative to their peers and/or market benchmarks;
[0022]FIG. 4 illustrates a user interface or screen shot of a browser page generated as part of an implementation of the invention showing an exemplary performance chart for one performance analysis methodology or rating scheme for independent research providers that shows providers based on their ability to more accurately pick or recommend security buys rather than sells;
[0023]FIG. 5 is a user interface similar to that shown in FIG. 4 illustrating another performance chart for another performance analysis methodology or rating scheme for independent research providers that shows providers rated against their peers based on a batting average of their past recommendations;
[0024]FIG. 6 illustrates a user interface or screen shot of a browser page of a GUI generated as part of an implementation of the invention showing an input window for allowing a user or consumer to provide input to select a research team from a group of independent research providers and to establish rating weights for each of their recommendations and to set team rules for making a team recommendation or to act as a virtual securities analyst providing an aggregated recommendation for a particular security;
[0025]FIG. 7 is a graph illustrating the alpha or differential obtained by use of an exemplary research team as a virtual securities analyst based on their 5-point recommendations over a representative time period;
[0026]FIG. 8 is a graph with explanatory text showing a report of an exemplary research team with the performance chart comparing performance of the research team relative to its three component research providers considered individually;
[0027]FIG. 9 is a data flow diagram illustrating components of a system or computer network of the invention (such as but not limited to the system of FIG. 1) showing data flow and functions of the system during its operation during initial team selection and validation and also during use of the team to obtain ongoing recommendations; and
[0028]FIG. 10 is a system flow diagram similar to that of FIG. 9 showing data flow and functions of a system according to the invention during team selection, team testing, and ongoing recommendation operations.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0029]The present invention is directed to methods and systems for generating and utilizing a research team from a set of securities research providers or analysts to provide a team recommendation for securities such as stocks on a watch or coverage list. In practice, a buy-side analyst such as money or asset manager uses the tools provides by the invention as a "team manager" to help them identity combinations of research providers that perform better as a team than as individuals. Without the tools provided by the invention including the team testing or validation module or process, it would be nearly impossible to select and test such a research team, e.g., an analytics engine in some embodiments may perform 3.6 million data points (calculations) in a minute in order to generate performance ratings or results for individual research providers and for formed research teams. Once a research team is formed, the systems of the invention can track changes in recommendations provided by a research team (e.g., the recommendations of a virtual securities analyst) as easily as tracking changes in recommendations of an individual research provider. While averaging of recommendations may be useful in some applications, custom rules, such as favoring one analyst's or researcher's recommendations for buys over other team members and favoring another analyst's sells, allows the user or customer of the embodiments of the invention to leverage each team member's strengths within the research team and generate an alpha in its stock or securities investments, i.e., an amount of performance that exceeds a particular benchmark that may be determined on a risk-adjusted basis.
[0030]The functions and features of the invention are described as being performed, in some cases, by "modules" that may be implemented as software running on a computing device and/or hardware. For example, the research team selection, testing, and use processes or functions described herein may be performed by one or more processors or CPUs running software modules or programs such as an analytics engine to generate provider performance, a team creation engine to allow a user to select and test a research team, a rules manager, and the like. The methods or processes performed by each module is described in detail below typically with reference to flow charts may be any devices useful for providing the described functions, including well-known data processing and storage and communication devices and systems such as computer devices or nodes typically used in computer systems or networks with processing, memory, and input/output components, and server devices31]The following description begins with a description of one useful embodiment of a computer system or network 100 with reference to FIG. 1 that can be used to implement the research team generation, validation, and use processes of the invention. Representative processes are then discussed in more detail with reference to the method 200 of FIG. 2 with support or more detail provided by the screen shots of a user interface or pages shown in FIGS. 3-6 that may be generated during operation of the system 100 of FIG. 1 or another system according to the invention. The description then proceeds to explain the advantages provided by use of a research team created according to the invention to make investment decisions with reference to the graphs and reports of FIG. 7 and 8. FIG. 9 and 10 provide system and data flow diagrams 900 and 1000 that provide further explanation of the workings of representative systems of the invention including their software modules run on typical servers or other computer devices, e.g., a web server accessible via the Internet or other wired or wireless digital communications network.
[0032]FIG. 1 illustrates a simplified schematic diagram of an exemplary computer system or network 100 and its major components (e.g., computer hardware and software devices and memory devices) that can be used to implement an embodiment of the present invention. As shown, the system 100 includes a virtual securities analyst system 110 that may comprise a server such as a web server or the like that is connected to a digital communications network 104 such as the Internet, an Internet, or the like. Such an arrangement allows client nodes 160 that run web browsers or similar applications to use a user interface or graphical user interface (GUI) 164 to access and interact with the analyst system 110. As shown in FIG. 3-6, a user or operator of the nodes 160 may be provided one or more research team screenshots 168 generated by the system 110 to review performance data on analysts, to select a research team from these analysts, to select a set of stocks or other securities to watch or cover, to obtain recommendations on these stocks from the "virtual" analyst via the recommendations of the team that are combined base on weightings and aggregation rules, and/or to otherwise provide user input and receive output such as the reports shown in FIG. 7 and 8. The connection to the network 104 also allows the analyst system 110 to access server 150 that has memory 152 storing market data 156 such as stock prices and other data from financial markets such as stock exchanges and services that track securities.
[0033]The analyst system 110 includes a processor or CPU 112 that runs a set of software modules (that may be implemented partially or fully with hardware in some cases) to provide its functionality. Specifically, the processor 112 runs a performance analytics module 114 that provides among other functions the ability to analyze the performance of a plurality of research providers or analysts that provide recommendations on securities (e.g., buy, sell, hold, and other recommendations on stocks or other securities). The module 114 may determine such performance and rate each analyst or provider in relation to their peers using a rating methodology or historical performance technique. The invention is not limited to a particular performance analysis technique or methodology 116 with the more important aspect being that a user of the client node 160 is able to see ratings of the providers or analysts such as on a screenshot 168 of GUI 164, in some cases select the methodologies to use to analyze the performance, and to select from the analysts for their research team using the ratings or performance results provided by the analytics module 114. Further, these same or differing methodologies 116 may be used to test a formed research team to determine if the team is able to beat or out-perform individual team members and/or market benchmarks. The studies 116 may be those presently known by those skilled in the financial analysis fields or ones later developed, and in one embodiment, the analytics for determining a research providers performance include: consistently outperforming peers, better at buys than sells, batting average, comparing conviction of rating with return, independent research versus investment bank research, size of research coverage universe versus returns, and comparing type of analysis, philosophy, or research methodology. These methodologies are explained in more detail with reference to FIGS. 2-6, but, again, other methodologies (e.g., performance measurements accepted by the financial industry to identify "best" performing analysts including qualitative measurements, momentum performance measurements, short stock pickers, and the like) may be include in rating methodologies 116 to determine historical performance of an analyst in a variety of financial environments and based on varying benchmarks. Memory 130 is provided in the system 110 and is used to store the research providers' performance 132 determined by the module 114 (e.g., ratings of each research analyst in an available set of analysts based on, for example, their ability to accurately pick stocks to buy or stocks to sell).
[0034]The performance data 132 may also include an identifier or listing for each research provider for who research information including investment recommendations is available. A research team selection module 120 is also run by the processor 112 to enable a user of client node 160 to form teams 136 that are stored in memory 130 and that include two or more of these research providers indicated in performance data 132 or elsewhere in memory 130 (or accessible by processor 112). Selection of a research team 136 via module 120 is an important aspect of the invention as it allows a user to select, such as via GUI 164, two or more research providers or analysts to be members 138 of their team or teams 136, and these members 138 act to provide a set of investment recommendations that are combined to form team recommendations 146 that are also stored in memory 130. The recommendations of the individual members 138 of each team 136 are combined to form team (or virtual securities analyst) recommendations 146 using team rules 140, which typically include weights to be applied to each analyst's recommendations and aggregation rules for determining how to combine the recommendations (as is explained in more detail with reference to FIG. 6). The team rules 140 preferably are selected or adjusted based on input from a user of the client node 160 but also may be set to default values.
[0035]The system 110 further includes a securities selection module 124 that allows a user such as a money/asset manager or independent investor to choose a set of securities or stocks 142 that is stored in memory 130. Then, the system 110 may operate to determine the recommendations 146 of the team (or teams) 136 for this set of securities 142 (e.g., stocks in a mutual fund, stocks being considered for addition or deletion from a portfolio or fund, or the like) and to watch for changes to such recommendations 146 (at which point an alert may be sent to the client node 160 via GUI 164 or via other massaging techniques such as e-mails, text massaging, voice massaging, or the like). In some cases, the set of securities 142 and a particular time period is selected by a user of node 160 prior to determining the research providers performance 132 by the analytics module 114, and this allows a user to determine the performance of the analysts and potential team members based on particular stocks such as stocks in a particular industry, stocks for companies involved in a particular technology or having a particular geographic coverage, or other distinguishing characteristics.
[0036]As will become clear, the research team 136 may also be tested or validated by determining their performance for all covered securities by operating the analytics module 114 or for just the set of securities 142 of interest to a user. If a team 136 does not perform well (e.g., outperform a particular benchmark or better than its members' individual recommendations), the user can provide input to the system 110 via the GUI 164 to modify the team 136 or to create a new team 136 with differing members 138, which can be tested or validated based on a test using historical performance data (e.g., based on past recommendations of the team members 138, combining those recommendations into team recommendations 146, and determining a resulting performance relative to some particular benchmark such as market indexes, individual analysts, or the like). A GUI generation module 128 is also included in the system 110 and run by the processor 112 to generate the GUI 164 and its screen shots or displays 168 and to provide data from memory 130 or other sources to the node 160.
[0037]From the description of the system 100, it will be understood that one of the aspects of the invention is to allow an asset or money manager or other user/operator accessing the system 110 to find the best or an useful combination of research providers or analysts that perform better as a team than as individuals and that even, in some cases, outperform the "star" or higher-performing individual research providers or analysts. Such teams 136 have a set of team rules 140 that may be default rules or be selected by the user/operator of node 160 to cause each of the team members 138 to contribute in a desirable manner, e.g., by having each member play to their strengths as indicated by historic performance measurements and/or ratings against their peers. With application of the team rules 140, the teams 136 can be thought to act somewhat like a committee (or single, virtual security analyst) with each committee or team member 138 providing one vote as to what the team recommendation 146 should be for a particular security.
[0038]In one embodiment, the team selection module 124 is useful when combined with the performance analytics module 114 because a user or operator of the client node 160 can be allowed to model or form a team 136 and test or validate it based on historic recommendations and the resulting team performance but without actually having access to the individual recommendations of the team members on any one stock or security. For example, an asset manager or other user generally operates with a fixed or limited budget for purchasing research from analysts, and they are forced to select a limited number of research providers and pay subscription or other fees for those analysts' information and recommendations. With the present invention, the asset manager can operate the client node 160 before making the purchase decision to model one or more teams 136 and determine their performance on a default set of securities or a set of securities 142 selected by the asset manager using historic market data 156 and prior recommendations regarding those securities by the team members 138 via operation of the analytics module 114 by processor 112. The use of the processor 112 to run the analytics module or engine 116 allows millions of recommendations over selected time periods (e.g., buy and sell recommendations, upgrades, downgrades, and the like) for thousands of securities (e.g., there are over 5,000 stocks available on the exchanges in the United States) to be processed according to the methodologies 116 to determine prior performance of individuals and of a hypothetical or proposed team 136, which would be impractical and nearly impossible without a fairly robust computing device or system.
[0039]After the asset manager identifies a useful team 136, the asset manager may decide to use their budget to purchase rights to the research of the analysts on the team 136 and begin to obtain team recommendations 146 for present investment decisions (i.e., based on current recommendations of the team). Note, the team rules 140 are used to form the "useful" or outperforming team 136 and would typically be used to process current individual recommendations to obtain current team recommendations 146 (although this is not required and the team rules 140 may be altered over time to try to enhance the team recommendations 146 and performance achieved using such recommendations 146). The securities selection module 124 may be used to help a user of node 160 to select a set of securities 142, as discussed above, and, in so-me embodiments, it is also adapted to use the team 136 as part of a stock screener or screening tool to rate or provide recommendations on stocks input to the team or to retrieve stocks that the team recommends by processing the team recommendations 146 to obtain all positive recommendations. An alert service module may also be provided such as part of the GUI generation module 128 to monitor the team recommendations 146 on an ongoing or periodic basis and when an upgrade, downgrade, or other event occurs for one of the team members 140 to determine new team recommendations 146. When the recommendations 146 for the team 136 are effected, an alert such as an e-mail, a text message, a voice mail, or other alert may be communicated to a user of the node 160 or other consumer of such an alert service (e.g., alert delivered via node 160 and/or another communication device such a wireless communication device).
[0040]FIG. 2 illustrates an exemplary research team formation and use process 200 according to the invention, and the process 200 will be discussed with reference to FIG. 1 as it may be implanted by operation of the system 100 and with reference to FIGS. 3-8 which provide interfaces or pages and reports that may be generated as part of process 200 to enable user input and to provide output or products from the system 110 to a user of a client node 160. The process or method 200 starts at 204 such as with loading of the modules of an analyst system 110 on one or more computing devices and by providing access to market data 156 to the analyst system 110 to allow performance measurements to be calculated by the analytics module 114. At 204, client nodes 160 may also be provided access to the analyst system 110, e.g., to allow investors to select a research team 136. At 210, the method 200 continues with the building of a database of historic performance information for a set or number of individual research providers (e.g., those firms or individual analysts that can be chosen to be as members 138 of teams 136). In some embodiments, the performance measurements are determined based on one or more rating methodologies 116 while in some embodiments step 210 is not performed until performance measures are requested by a user such by making a query via a GUI 164 on a node 160.
[0041]With this in mind, the method 200 continues at 220 with the analyst system 200 functioning to provide at the client node 160 a list of individual research providers along with all or subsets of the historic performance for such providers. For example, the GUI generation module 128 may act to provide one or more research team screenshots 168 on GUI 164 in response to a user querying the system 110 for information on which stock analysts and/or research providers are available as team members 138 and for which performance measurements have been determined or can be readily determined by analytics module 114. For example, FIG. 6 illustrates a screenshot 600 of a representative page that may be displayed on the client node 160 through operation of the research team selection module 120 and the GUI generation module 128 (and, in some cases, a browser or similar application on client node 160). Page or screenshot 600 will be described in more detail below but for now it is useful to note that a build team window 630 is included that allows members to be listed and added, such as by selection of button 640 with a keyboard, mouse, or other input device and positioning of icon 350.
[0042]To determine which analysts from the set of available analysts to include on a team 136, it is often useful to review their prior performance as determined at 210 to identify their strengths and weaknesses. As part of step 220, all or subsets of such performance measurements is provided or reported to a requesting user. FIG. 3 illustrates a screenshot or web page 300 that the system 110 may present to a user of a node 160 as part of performing step 220. In screen 300, the frame indicates that a user has chosen provider selection 310 and analysis or analytical tools 312 within this selection 310. The user car also choose, such as by positioning of icon 350 and input on a user input, to view the list of available individual research providers at 314, choose to view their previously formed research teams 136 at 316, and/or choose to view their set of securities or coverage lists at 318. With reference to step 220, the window 320 shows a list of performance measurements that the user can request for display in a subwindow of window 320 or in another page or screen shot (and, in some cases, run by analytics module 114), and these measurements may correspond or build on the ratings methodologies 116.
[0043]As discussed, a variety of performance rating and evaluation methodologies 116 may used to assist a user of client node 160 in selecting team members 138 for a team 136. Typically, a team will outperform its individual members considered separately with a proper set of team rules 140 but better teams are often achievable by selecting analysts or providers that are among the strongest in a particular category or are among the best with regard to a particular performance methodology. With this in mind, a user may view the window 320 and select one of the subsets of performance measurements or results of the listed methodology. These methodologies include an analyst that consistently outperforms their peers at 322, which generally involves the performance analytics engine 114 determining which research providers have outperformed their peers (or at least the peers in the available list of analysts at 314) for a particular period of time such as a recent period (e.g., last 3 to 6 months) or over a longer period of time (e.g., last 1 to 3 or more years). When 322 is selected, a listing, report, table, chart, or other report is typically transmitted from the system 110 to the requesting client node 160 for display at 168 on GUI 164 or for outputting as a hard or electronic copy.
[0044]Another rating methodology is shown at 324 to be determining which analysts are better at buy or positive recommendations than at sell or negative recommendations. This is significant because many firms rarely issue a truly negative recommendation due to conflicts of interest or other issues, and as a result, these firms or analysts are generally unable or are at least slow in predicting when a security should be sold but are still very competent at making buy recommendations for companies. FIG. 4 illustrates a page or screenshot 400 that may be provided to a requesting client node 160 to display such a performance measurement for the available independent research providers. Window 420 includes a results chart 421 that shows the performance of a number of research providers with a "best" provider or high performing analyst shown at 422 with other providers shown at 426. The ratings or placement of the providers 422, 426 is based in this case on return on positive ratings for the last 5 years on a 5-point scale (or normalization to such a scale) and also on return on negative ratings for the last 5 years on a similar 5-point scale (which typically will have two negative ratings below a neutral or hold rating or recommendation and two positive ratings or recommendations above a neutral or hold rating). As shown, the "Provider1" as shown at 422 outperforms his peers both in regard to return on positive ratings and in regard to return on negative ratings or recommendations. Other providers such as "Provider6" outperform their peers (or median) in regard to their positive ratings or recommendations while significantly underperforming their peers with regard to their negative ratings or recommendations. Section 423 of window 420 provides details or performance results for a selected provider from the chart 421 (i.e., for "Provider1" in this example). The information can be requested by a user of the system 111 for use in selecting one or more team members 138 for their teams 136 and for deciding what weights to apply to the votes or recommendations of such team members 138 and how best to combine the recommendations into an aggregate or combined team recommendation 146. For example, Provider6 who is shown to be good at providing positive recommendations but not negative recommendations may be weighted more for buys than for sells while Provider1 who is shown to excel at making both recommendations may be equally weighted or have a heavier weight than Provider6 for sells (and, optionally, for buys). As will become clear from further description of FIG. 6 and step 240 of method 200, each team member 138 of a team 136 is able to provide both positive and negative recommendations on any covered stock and a user can assign different weights for each team member 138 and for each type of recommendation (i.e., positive or negative or, in some cases, neutral).
[0045]Referring again to FIG. 3, another methodology 326 involves determining research providers' batting averages. These averages refer to the concept of a provider making a call (e.g., an upgrade to a buy or a downgrade to a sell or other positive or negative recommendations) and determining the percentage of the time that the call is in the right direction (e.g., if the call was a positive recommendation did the stock's price increase afterwards, if the call was a negative recommendation did the stock's price decrease afterwards, and the like). FIG. 5 illustrates a screen or page 500 that may be provided to a client node 160 when link 326 is chosen in screen 300 of FIG. 3. As shown, a window 520 is provided with a performance or results graph 521 that rates or shows the performance of a number of individual research providers or analysts with regard to their batting averages, as may be determined by analytics module 114 implementing the "batting average" methodology of performance evaluation. The chart 521 shows the providers batting average along the X-axis with Provider2 and Provider 4 outperforming their peers and the median of all providers. The Y-axis of chart 521 is used to show the performance measurement of return achieved or achievable by an investor that followed all of the ratings or recommendations of the same research providers over the past year. For both axes, the number of stocks (or "symbols") tracked in the analysis was relatively large at over 1,150, which is indicative of the large volume of calculations that are performed by a performance analytics module 114 of embodiments of the invention to assist a user in selecting appropriate team members 138. As will be appreciated, it may be useful to have one or two team members 138 on a research team 136 that have high batting averages regardless to return as batting average is indicative that their "calls" are in the correct direction and/or it may be useful to select team members 138 that have both a high batting average and a high return as may be shown in chart 521 for providers in the upper right quadrant. Area 523 of window 420 is used by system 110 to deliver or present explanatory information regarding the performance information generated by the analytics module 114.
[0046]Using the interface screen 300 of FIG. 3, a user may select other performance data generated by the analytics module 114. For example, a user may select a methodology referred to at 328 as "comparing conviction of rating with return" which analyzes whether an analyst's use of a 3-point scale such as buy, hold, or sell or a 5-point scale that may add strong buy and strong sell to the 3-point scale makes a difference in returns obtained using their recommendations. Alternatively or in addition, a user may select at 330 an analysis referred to here as "independent research versus investment bank research" that compares the performance of independent research providers against the performance of affiliated providers such as investment banks that are covering the same stocks or stock sectors (e.g., does independence necessarily lead to better performance?). At 332, a user may choose a methodology that looks at the size of the research coverage universe versus a provider's returns to try to determine whether research providers that cover large numbers or small numbers of stocks perform better or if there is no discernable difference. At 334, a user may choose a performance analysis methodology 116 that involves comparing types of analysis, philosophy, and research methodologies used by various research providers to determine whether and how such choices may effect performance. Each of these subsets of performance information (and others not shown but considered within the breadth of the invention) may be provided to a user at a client node (or otherwise by delivering a hard or electronic version) at step 220 of process 200 to assist the user in picking the members 138 of a research team 136 that may complement each other to provide enhanced combined recommendations 146.
[0047]Referring again to FIG. 2, the method 200 continues at 230 with receiving from the client node 160 a user's selection of two or more individual research providers 138 to establish a research team 136. In some embodiments, this will be in response to the user interface screen 600 of FIG. 6 or a similar page, form, or interface being generated by GUI generation module 128 and displayed on node 160 as shown at 168 in FIG. 1. The screen or page 600 includes a window 620 that may be displayed when "My Research Teams" 316 is selected in the "Provider Selection" section 310. An area 624 is provided that lists previously formed teams at 626 and the team being created or modified (e.g., having its weighting or aggregation rules changed or adding or deleting members) at 628 (e.g., a text box where a default or custom name may be provided). As can be seen, a single user can create more than one team as shown in FIG. 1 at 136 and the teams may have the same members 138 with differing team rules 140 or may have different members 138 with the same or different rules (e.g., different rules may be used when the team is to be used to watch different sets of stocks 142 or for providing recommendations in differing market conditions or the like).
[0048]FIG. 6 shows a region or subwindow 630 to assist a user in building their team, and as shows, area 632 provides a list of five team members 138 that have been selected by a user at step 230 for inclusion on the team indicated or named at 628. If a user wants to provide additional members 138 (or, in some cases, delete members 138), they may move icon 350 to "Add Members" button 640, and at that point, a pull down or other listing of the available individual research providers is provided (or a user may type in or otherwise provide a name or identifier for an additional member) At step 240 of method 200, the system 110 provides a user of a client node 160 the default weighting provided to each team member and the user provides their settings for these weights (e.g., acceptance of the default settings and adjustments). In some embodiments, identical weights are applied to both positive and negative recommendations (e.g., if an advisor's recommendation or rating is given a weight of 25 percent this is used for both buy and sell type recommendations).
[0049]In other embodiments as shown in FIG. 6, a separate weight is applied to the positive and to the negative ratings or recommendations of each individual research provider (although, for some providers, the weights may be equal for each type of rating as chosen by a user/default values). As discussed with regard to the performance analysis of the providers recommendations, it is often desirable to play to an analyst's strengths by weighting tie type of recommendations they are better at providing more heavily than the recommendations that are not their strength, which may even be rated at zero such that a particular type of recommendation from that analyst is given no weight (i.e., is not considered as part of determining a team recommendation 146). Region 650 of window 620 includes settings indicative of rating weights for each member of the research team being defined by a user. A column of input boxes (e.g., pull down boxes or the like) is provided for positive recommendations 654 and for negative recommendations 658. In one embodiment, a default value for each member is to have an "average" weight that may be provided in percentages that add up to 100 percent (or weights from 0 to 100 with the total being 100 without any units) but, of course, numerous other weighting algorithms may be used to provide weighting to each team member's recommendations. For example, with 5 team members as shown on a team, the default weighting would be 20 percent for both positive and negative recommendations or ratings for a stock. If the user provides no modifications or inputs, the team members' votes or ratings would all be treated equally (e.g., each receive "1" vote). However, more typically, the weights are selected to emphasize the strengths of the analysts as identified by the analyzed performance at step 220. For example, one of the research providers is shown in columns 654, 658 to have equal weighting for each of their recommendations but at 25 percent because one provider is not allowed to provide input or is not considered for positive recommendations. Likewise, two analysts are weighted as zero for negative recommendations as they may have a history of not accurately picking such ratings based on a particular performance metric, but they are included in the team to have their positive recommendations considered in the team recommendation. Further, one member is only included for their negative recommendations and another is included for both recommendations with their negative recommendations weighted more heavily (e.g., they are better at predicting sells but are also relatively good at buys). The combinations of the weightings are nearly infinite with the specific weights shown only being provided as one example and not as a limitation.
[0050]At 250, the user of the client node inputs a selection of the team rules 140 that are received at the analyst system 110 and used during validation/testing and during use of the team to determine the team recommendations 146. Referring to FIG. 6, a team rule entry area 660 is provided with a text or pulldown box 666 in which a user can view any default team rules and select from a list of available rules for aggregating the recommendations of the team members. The team manager module helps a user construct a research team to emphasize an individual provider's strength within the team, such as over-weight their buys or sell recommendations. A research team is built by choosing weightings, rules and coverage preferences. The system then generates a history of buy, sell and hold recommendations for that team. The team can be plotted on the scatter plot and analyzed against peers, by portfolio, by industry, sector or security.
[0051]Applying team rules in one embodiment involves selection among five rule categories including: average, majority, consensus, unanimous to buy and one to sell, and unanimous to sell and one to buy. For average, the average of the individual providers ratings are calculated in order to create a team recommendation. The positive and negative weights of the individual team member ratings are applied and the average rating is calculated. For majority, at least half of the team members supplying a rating must agree in order to create a team recommendation. The positive and negative weights of the individual team member ratings are applied and the majority rating is calculated. For consensus, all team members supplying a rating must agree in order to create a team recommendation (i.e., weights do not apply). For unanimous to buy and one to sell, all team members supplying a rating must agree to a buy for a team recommendation of buy, but if one team member goes to sell, then the team recommendation prompts a sell (i.e., weights do not apply). Weights are taken into account for the average and majority rules only. Rating weights do not need to be set for unanimous to buy, one to sell or consensus. The total for positive or negative weightings is based on the analyst's preference and while the dialog box has values from 1-100, any positive integer is valid and numbers greater than 100 are also valid. For example an analyst gives 2× the weight of a single provider, effectively doubling their rating within the aggregate score. Additional areas an analyst can define in order to produce a research team include: opinion required, and coverage required. For opinion required, the provider is required to have an opinion in order for there to be a team rating. For rating coverage, in order for a team rating to be generated at least X of Y team members must cover the stock for a team to form an opinion. This defaults to a minimum of one team member.
[0052]Other team rules may be used that do not use the weights. For example, a user may decide to have the team recommendation determined by a majority of the team members. When this team rule or recommendation aggregation rule is applied, more than half of the members must agree to either buy or sell (or make a positive or negative recommendation) before such a rating or recommendation is generated for the team. Use of this rule may tend to encourage the inclusion of an odd number of team members such as 3, 5, 7, or more team members to avoid ties but this is not a requirement. The user may at 250 also decide to use a "consensus" rule in which all must agree to buy or sell (or provide a positive or negative) recommendation with just one dissenter being allowed to block the recommendations of all the other members. Further, a user may select in box 666 to have the team rule require that a positive or buy recommendation requires unanimity while only one negative or sell recommendation may be required to make a negative or sell recommendation from the team. With the above discussion understood, other recommendation combination rules will be apparent to those skilled in the art and are considered within the breath of the concept of applying a team rule to combine the team members' recommendations with or without weighting being applied.
[0053]Further rules or team settings may be provided such as selection of a box 634 to indicate that a team recommendation cannot be generated if one or more selected team members does not follow a stock or otherwise has not provided a recommendation (e.g., certain team members may be considered critical to achieving an accurate team recommendation). Similarly, a setting at 670 may be entered by a user to require a particular member of the team to follow a stock before the team can generate a recommendation, and when that number of recommendations from the team members is not present the team will not issue a recommendation or issue a statement or report indicating there the stock is not followed (e.g., "no recommendation available" or "this stock is not followed by a required quorum of the team" or the like). Once the members are selected and rules and weights set the team can at least temporarily be saved in memory 130 by selecting button 680.
[0054]The method 200 continues at 260 with validating or testing the research team 136 defined by the user based on a default set of securities (e.g., all securities, a particular subset of securities, or the like) or a user-provided subset of securities (e.g., the set of securities 142 defined by the user as ones they wish to track or have coverage such as those in their fund or considered for addition to their portfolio). The testing or validation also is performed over a default time period such as the past year, past two years, past three to five years, or the like or a time period selected by the user (e.g., a time period corresponding with a particular market trend such as a bull market or bear market or a particular economic environment). The testing or validation may also be performed based on a default or user-selected methodology 116 such as batting average, outperforming peers, or the like as discussed above with regard to determining performance of analysts at step 220 with performance analytics module 114. In a testing or validation step 260, the analytics module 114 uses the team weights and team aggregation rules compared to historic market data 156 to determine how the research team would have performed based on their actual, historic recommendations, which are also available in the market/historic data 156 (or in a separate database that stores the research of the providers or analysts). For example, the performance of the research team is determined for investing in a set of stocks over a particular time period using team recommendations 146 created by retrieving prior recommendations of the team members 138 for the stocks and generating team recommendations 146 using the team rules including weighting and aggregation rules.
[0055]At 270, the team's performance and/or recommendations are reported to a user by generating a report or displaying a chart or graph on the client node 160. Such reports or charts may provide the team's performance or ranking relative to the individual team members, to all available research providers, and/or to market benchmarks. For example, the system 110 may generate at 270 an alpha chart 700 as shown in FIG. 7 that can be provided to the client node 160. As discussed earlier, alpha is a measure of a differential between the team's performance and a benchmark such as a market index. As shown, the alpha chart 700 includes a hold portion 710 in which the research team was able to provide alpha, alpha return, or, simply, differential return 714 over the index return 712 as measured with average returns over time using a 5-point rating or recommendation scale. Similarly, in underperform and sell portions 720, 724 (e.g., negative recommendations), the performance information indicates the team was able to outperform the market index or provide an alpha. Likewise, during positive recommendations of buy and outperform 728, 730, the research team's recommendations led to increased returns or an alpha compared to the market index or benchmark. FIG. 8 shows a research team report 800 with a return or performance chart 820 that shows the research team 822 has outperformed (or provided an alpha) over the individual research providers 826, which in this example were the individual members of the team (as shown in the team member overview provided in the left hand portion of the report). As shown in this test of the formed research team, the team's recommendations led to better returns in the prior 1 and 5 year periods than any of the individual members of the team and also provided a better batting average for both buys and sells.
[0056]The method 200 continues at 274 with a determination of whether the user wishes to modify the team or pick a new team. If so, the method 200 returns to 230 (or to 220). If the research team had produced significant out performance as shown in chart 700 as shown in FIG. 7 and a report 700 as shown in FIG. 8, the user may decide not to change the team or its rules, but the user may wish to build another team to try to achieve better performance than that achieved with the existing research teams or a team that is able to achieve an alpha in particular market or financial environments or in a particular stock sector or the like. In other cases, the user may attempt to slightly modify the rules such as weighting to try to improve the performance of the research team The method 200 also may continue at 276 with a determination of whether to retest the team 276, which may be useful to check if the team performs better over differing time periods (e.g., over differing economic trends, markets, and the like) or to apply a differing performance methodology to validate or test the research team using historic market data and historic recommendations of the team members. At 278, the user can select to change the rules of the team, too, prior to retesting which returns control to step 240 or can retest at 260 such as by changing the time period for validation. The method 200 then ends at 290.
[0057]Use of the formed team is not shown in the method 200 of FIG. 2, but it will be understood that once a research team is formed that it may be used prospectively to make investment decisions. For example, research from team members may be ordered and processed as a stock screener to determine when to add new investments to a portfolio or fund. In other cases, the research team, its data or research including stock recommendations, and the team rules may be processed by system 110 or other modules/systems to track a set of securities 142 and determine when stocks should be bought, held, and sold based on the teams current recommendations 146 for each of these stocks. Further, alerts may be issued when there is change to one of the members recommendations a call, an upgrade, a downgrade, or the like that effects the team recommendations. Further, with reference to the method 200, the user may have the option of manually selecting the team members such as after reviewing the team members' historic performances as discussed above or the user may choose to have their members chosen based on input criteria. For example, a user may choose to add a team member with a particular ranking when a performance methodology is considered, e.g., select the highest ranked predictor of sells, the highest ranked batting average analyst, the highest ranked momentum analyst, and the like. Further, in some embodiments, the "default" weights may be chosen by the system 110 based on the determined performance of each of the team members relative to the other members (e.g., an automated weighting to highlight the strengths and weaknesses of the team members) such as by using proportions based on the rankings or returns of the team relative to the other members or the like.
[0058]FIG. 9 illustrates a system flow diagram 900 illustrating operation of a system according to the invention for team selection, management, and use such as may be achieved with system 100. As shown, a user or client node operated by a user 904 interfaces with a system such as by inputting data and viewing reports or outputs of the system. The system includes a research provider selection module 910 in which a universe of available symbols or stocks of companies 910 is defined and may include the stocks of a particular stock exchange(s) or be a larger set or a subset of such stocks (e.g., essentially filtering providers by interest list or holdings). At 914, the module 910 may allow a user to apply one or more filters to the universe of symbols and at 918 a dataset of the filtered subset of symbols is generated, and this allows the user 904 to select a set of securities or stocks for coverage by a research team and for use in evaluating performance of individual research providers.
[0059]The system includes a research team manager module 930 that the user 904 uses at 932 to choose a collection of individual research providers to draft a research team, and, as discussed with regard to FIG. 2, the members are often selected based on their historic performance or rankings. Each of the drafted teams and their team members are stored in memory at 933. At 934, the system functions to allow the user 904 to create a recommendation rule or team rule for defining how the recommendations of each of the team members will be processed on each of the teams to allow a team recommendation to be generated, and the rules are stored in memory at 935. At 936 a script of the team rule may be generated and then compiled at 938 for later application to recommendation information for the team members. She research team manager 930 is shown at 940, 942, and 944 to act to determine from analytics data rating or recommendation history 944 recommendations of both the individual research providers and the research team on which they are members at 942 with team recommendations being determined at 940 using rules 938. At 946, the research team manager 930 may act to determine new or updated recommendations of an individual research provider on one of the teams 933, which may be provided by continuous updates or change detection at 948 in the research provider reports data (e.g., processing of inbound data feeds from a data acquisition group or DAG and/or a document management architecture or DMA) that triggers at 949 an update signal or alert.
[0060]The system further includes a performance analytics engine 960 that may be requested at 950 by the research team manager 930 to recalculate team and/or individual research provider performance or rankings. To this end, the engine 960 may periodically such as once a day obtain at 962 analyst data including recommendations and at 964 the closing price of stocks, such as those in the dataset defined at 918 and/or that are associated with research provider recommendations. At 966, stocks that are being tracked have their prices update and the analytics database is updated at 968 to reflect performance details based on the providers ratings or recommendations. At 9703 it is indicated that the engine 970 may be rerun periodically such as once per day or in response to a query 950. At 974, the engine 960 functions to update recommendation or rating history tables and performance based on a particular analysis methodology and this information is stored in memory at 978.
[0061]In some cases, the methodology employed by the engine 960 to determine team and individual research provider performance is a total return-based methodology (including, in some cases, dividend reinvestment) that provides meaningful return experiences for direct comparison to other investments, providers, and benchmarks. The methodology determines out-performance or under-performance for all recommendations or ratings from buy, sell, and hold periods (e.g., see the alpha chart of FIG. 7). When combined with scoring or other techniques, this methodology can provide relative performance comparisons to determine impact of rating conviction for analysts that provide 5-point rating scales as well as other scales such as 3-point rating scales.
[0062]FIG. 10 provides another data flow diagram 1000 that illustrates data flow during operation of a system according to the invention (such as system 100 of FIG. 1). As shown, a script engine 1020 provides input data/messages to a methodology data calculation module 1040 (e.g., analytics engine 960 of FIG. 9 or performance analytics module 114 of FIG. 1 or the like) and rules module 1030. The messages or information required by module 1040 may be provided by market information and/or research provider reports or database 1010 and from input from a user via their Web or other node 1014. The messages/information includes pricing updates 1022 regarding monitored stocks (e,g., 5000 or more stocks or a subset of the stock symbol universe). Recommendation updates 1024 are also tracked and when an analyst makes a call such as an upgrade or downgrade this information is retrieved by the engine 1020 and passed to the module 1040 for updating performance information.
[0063]A user may create and update teams with communications 1028 that are passed by script engine 1020 from the Web browser or client node 1014 to the research team module 1060 via rules module 1030 that is used to update and track team rules such as weighting and aggregation algorithms for generating team recommendations and via calculation module 1040 that uses the teams and its rules to determine team performance and its recommendations. Research provider module 1050 is used to provide a listing of available research providers (e.g., from one to 200 or more) and in some cases to provide research provider reports which may also be provided by module 1010. A user may request at 1026 that the data be rolled up or combined to generate performance reports that compare the performance of a generated research team with its individual members and to report on the teams recommendations and ability to generate alpha over time. The information that is output to the user from the calculation module 1040 may be considered a wrapped library of data from the research team 1080 that is stored in memory.
[0064 method 200 and flow shown in FIGS. 9 and 10 is not intended to indicate a mandatory order of steps or processing, and many of the functions of the invention may be performed in any order and/or may be repeated as useful to better select, validate/test, and use research teams made up of individual research providers or analysts. Prior to the invention described herein, there was no analytics tool or process that generated teams of research providers whose recommendations were processed according to customizable weighting and/or rules to generate improved investment recommendations (e.g., buy, hold, sell, and similar recommendations), which generate significant alpha relative to benchmarks when they are implemented by a money or asset manager or other investor in securities. Prior technology was useful for generating performance data on individual research providers based on historical financial data such as prior recommendations for stocks and the stocks performance after such recommendations. For example, the prior performance analysis technology may have been used to determine a research analyst's such as an investment bank's batting average (i.e., consistency), return (i.e., performance), and the like, but the inventive methods and systems described in this document were the first to roll up performance to allow a user or customer to create a research team and then apply rules such as weighting algorithms and aggregation rules to generate recommendations that clearly perform better than recommendations of the individual team members and often better than accepted market performance benchmarks.
Patent applications by James Tanner, Boulder, CO US
Patent applications by WALL STREET ON DEMAND
Patent applications in class Portfolio selection, planning or analysis
Patent applications in all subclasses Portfolio selection, planning or analysis
User Contributions:
Comment about this patent or add new information about this topic: | http://www.faqs.org/patents/app/20090006268 | CC-MAIN-2014-23 | en | refinedweb |
The GNU Debugger (gdb) is the most popular open source debugger in use. Originally designed for C, it's been ported to debug code in many languages on a variety of computing systems, from tiny embedded devices to large-scale supercomputers. It's generally used as a command-line executable, but it can be accessed through software using the little-known MI protocol. This article explains how MI works and how the CDT uses MI to communicate with gdb. This concrete example of CDT-debugger interaction should be helpful for anyone wishing to interface a custom C/C++ debugger from CDT.
The Java™ classes discussed here build on the classes and interfaces provided by the CDI, introduced in Part 1 of this "Interfacing with the CDT debugger" series. To remove any confusion, let's be clear about the difference between the CDI and MI:
- The C/C++ Debugger Interface (CDI) was created by Eclipse/CDT developers so CDT can access external debuggers.
- The Machine Interface (MI) was created by gdb developers so external applications can access the gdb.
This may look like a straightforward distinction, but many of the classes I'll present straddle both the CDI and MI, and sometimes it's hard to see where one interface ends and the next begins. Once you understand how the CDI and MI work together, you'll be better able to link custom debug tools to the CDT, whether they're based on gdb or not
Understanding the GNU Debugger Machine Interface (gdb/MI)
Most people access gdb from a command line, using simple instructions like
run,
info. This is the human interface to gdb. A second method of
accessing gdb was developed for interfacing the debugger with software: the Machine
Interface (MI). The debugger performs the same tasks as before, but the commands and output responses differ greatly.
An example will make this clear. Let's say you want to debug an application based on the code below.
Listing 1. A simple C application: simple.c
int main() { int x = 4; x += 6; // x = 10 x *= 5; // x = 50 return (0); }
After you compile the code with
gcc -g -O0 simple.c -o
simple, a regular debug session might look like Listing 2.
Listing 2. A debug session
$ gdb -q simple (gdb) break main (gdb) run 1 int main() { (gdb) step 2 int x = 4; (gdb) step 3 x += 6; // x = 10 (gdb) print x $1 = 4 (gdb) step 4 x *= 5; // x = 50 (gdb) print x $2 = 10 (gdb) quit
Listing 3 shows how the same gdb session looks using MI commands (shown in bold).
Listing 3. A debug session using MI
$ gdb -q -i mi simple (gdb) -break-insert-main ^done,bkpt={number="1",type="breakpoint",disp="keep",enabled="y",addr="0x00401075", func="main",file="simple.c",fullname="/home/mscarpino/simple.c",line="1",times="0"} (gdb) -exec-run ^running (gdb) *stopped,reason="breakpoint-hit",bkptno="1",thread-id="1",frame={addr="0x00401075", func="main",args=[],file="simple.c",fullname="/home/mscarpino/simple.c",line="1"} (gdb) -exec-step ^running (gdb) *stopped,reason="end-stepping-range",thread-id="1",frame={addr="0x0040107a", func="main",args=[],file="simple.c",fullname="/home/mscarpino/simple.c",line="2"} (gdb) -exec-step ^running (gdb) *stopped,reason="end-stepping-range",thread-id="1",frame={addr="0x00401081", func="main",args=[],file="simple.c",fullname="/home/mscarpino/simple.c",line="3"} (gdb) -var-create x_name * x ^done,name="x_name",numchild="0",type="int" (gdb) -var-evaluate-expression x_name ^done,value="4" (gdb) -exec-step ^running (gdb) *stopped,reason="end-stepping-range",thread-id="1",frame={addr="0x00401081", func="main",args=[],file="simple.c",fullname="/home/mscarpino/simple.c",line="4"} (gdb) -var-update x_name ^done,changelist=[{name="x_name",in_scope="true",type_changed="false"}] (gdb) -var-evaluate-expression x_name ^done,value="10" (gdb) -var-delete x_name ^done,ndeleted="1" (gdb) -gdb-exit
The
-i mi flag tells gdb to communicate using the MI
protocol, and you can see the difference is significant. The command names have changed
dramatically and so has the nature of the output. The first line of the output record
is either
^running or
^done,
followed by result information. This output is called a result record, and it
can include
^error and an error message.
In many cases, the MI result record is followed by
(gdb) and
an out-of-band (OOB) record. These records provide additional information about the
status of the target or the debugging environment. The
*stopped message after
-exec-step is an
OOB record that provides information about breakpoints, watchpoints, and why the target
has halted or finished. In the previous session, gdb returns
*stopped,reason="end-stepping-range" after each
-exec-step, along with the status of the target.
gdb/MI is hard for humans to understand, but it's ideal for communication between software processes. The CDT enables this communication by creating a pseudo-terminal (pty) that sends and receives data. Then, it starts gdb and creates two session objects to manage debug data.
Starting the debugger
As described in Part 1, when the user clicks Debug, the CDT accesses an
ICDebugger2 instance and calls on it to create an
ICDISession. This debugger class must be identified in a plug-in that
extends the org.eclipse.cdt.debug.core.CDebugger extension point. Listing 4 shows what this extension looks like in the CDT.
Listing 4. The CDT default debugger extension
<extension point="org.eclipse.cdt.debug.core.CDebugger"> <debugger class="org.eclipse.cdt.debug.mi.core.GDBCDIDebugger2" cpu="native" id="org.eclipse.cdt.debug.mi.core.CDebuggerNew" modes="run,core,attach" name="gdb Debugger" platform="*"> <buildIdPattern pattern="cdt\.managedbuild\.config\.gnu\..*"> </buildIdPattern> </debugger> </extension>
This states that the GDBCDIDebugger2 implements the
createSession() method that begins the debug process. When the CDT
calls this method, it provides the debugger with the launch object containing
configuration parameters, the name of the executable to be debugged, and a progress
monitor. The GDBCDIDebugger2 uses this information to form a string that starts the gdb executable:
gdb -q -nw -i mi-version -tty pty-slaveexecutable-name.
The GDBCDIDebugger2 creates an
MIProcess for the running gdb
executable, then creates two session objects to manage the rest of the debugging
process:
MISession and
Session.
The
MISession object manages communication to the gdb, and
the
Session object connects the gdb session to the CDI
described in Part 1. The rest of this article discusses these session objects in detail.
The
MISession
After starting gdb, the first thing the GDBCDIDebugger2 does is create an
MISession object. This object handles all access to the gdb debugger using three pairs of objects:
- An
OutputStreamto send data to the gdb process and an
InputStreamto receive its response
- An outgoing and incoming
CommandQueueto hold MI commands
- A
TxThreadthat sends commands from the output
CommandQueueto the
OutputStreamand an
RxThreadthat sends receives commands from the
InputStreamand places them in the input
CommandQueue
An example will demonstrate how these objects work together. If the debug session is
conducted remotely, the CDT initiates communication by sending a
remotebaud command to gdb, followed by the baud rate. To accomplish
this, it calls the
MISession's
postCommand method, which adds the
remotebaud command to the
session's outgoing
CommandQueue. This wakes the
TxThread, which writes the command to the
OutputStream connected to the gdb process. It also adds the command
to the session's incoming
CommandQueue.
Meanwhile, the
RxThread is constantly reading the
InputStream from the gdb process. When new output is available, the
RxThread sends it through the
MIParser to acquire the result record and the OOB record. It then
searches through the incoming
CommandQueue to find the gdb
command that prompted the output. Once the
RxThread
comprehends the gdb's output and the corresponding command, it creates an
MIEvent used to broadcast the change in the debugger's state.
As data is transferred to and from gdb, the
TxThread and
RxThread create and fire
MIEvents. For example, if the
TxThread sends a command changing a breakpoint to gdb, it creates an
MIBreakpointChangedEvent. If the
RxThread receives a response from gdb whose result record is
^running, it creates an
MIRunningEvent.
These events are not implementations of the
ICDIEvent interface described in
Part 1. To see how
MIEvents and
ICDIEvents relate, you need
to understand the
Session object.
Session,
Target, and
EventManager
After creating the
MISession, the GDBCDIDebugger2 creates a
Session object to manage the operation of the CDI. When its
constructor is called, the
Session creates many objects to
assist with its management responsibilities. Two objects are particularly important:
the
Target, which manages the CDI model and sends commands to
the debugger, and the
EventManager, which listens for
MIEvents created by the debugger.
As Part 1 explains, the
Target receives debugging commands from the CDT and packages them for
the debugger. For example, when you click the Step Over button, the CDT finds
the current
Target and calls its
stepOver method. The
Target responds by
creating an
MIExecNext command and calling
MISession.postCommand() to execute the step. The
MISession adds the command to its outgoing
CommandQueue, where it's transferred to the debugger in the manner described earlier.
The gdb output, packaged into an
MIEvent, is received by the
session's
EventManager. When this object is created, it's
added as an Observer for the running
MISession. When
the
MISession fires
MIEvents, the
EventManager interprets them and creates corresponding
ICDIEvents. For example, when the
MISession fires an
MIRegisterChangedEvent,
the
EventManager creates a CDI event called
ChangedEvent. After creating the CDI event, the
EventManager notifies all interested listeners that a state change
has occurred. Many of these listeners are elements in the CDI model, but an important
exception is an object called
CDebugTarget. This is part of another model hierarchy, explained next.
The CDI and the Eclipse debug model
For your debugging plug-in to interface the Eclipse debug views, such as
Register View and Variable View, you have to play by Eclipse's rules: You
have to use events and model elements taken from the Eclipse debug platform. The root
element in the Eclipse debug model is an
IDebugTarget, and
other elements include
IVariables,
IExpressions, and
IThreads. If these names
look familiar, it's because the CDI model hierarchy is structured after the Eclipse
debug model hierarchy. But the CDI model and the Eclipse debug model can't talk directly to one another.
For this reason, the CDT contains a set of classes that wrap around CDI classes to
provide a bridge between the CDI model and the Eclipse debug model. The
CDebugTarget is the root of this wrapper-model hierarchy, and it
listens for events fired by the CDI
EventManager. When it
receives a new event, the
CDebugTarget processes a large set
of
if and
switch statements to
determine how to respond. For example, if the CDI event is an
ICDIResumedEvent, the
CDebugTarget executes the code in Listing 5.
Listing 5. Converting CDI events to
DebugEvents
switch( event.getType() ) { case ICDIResumedEvent.CONTINUE: detail = DebugEvent.CLIENT_REQUEST; break; case ICDIResumedEvent.STEP_INTO: case ICDIResumedEvent.STEP_INTO_INSTRUCTION: detail = DebugEvent.STEP_INTO; break; case ICDIResumedEvent.STEP_OVER: case ICDIResumedEvent.STEP_OVER_INSTRUCTION: detail = DebugEvent.STEP_OVER; break; case ICDIResumedEvent.STEP_RETURN: detail = DebugEvent.STEP_RETURN; break; }
The
CDebugTarget responds to CDI events by
creating
DebugEvents, which are generally related to
stepping, breaking, and resuming execution. After creating these events, it accesses
the Eclipse
DebugPlugin and calls its
fireDebugEventSet method. This notifies all the Eclipse debug
listeners that a state change has occurred. That is, any object that adds itself as a
DebugEventListener receives the
DebugEvent. This includes the Eclipse debug views, such as the Memory View and the Variables View.
The CDT debug views
The MI-CDI-wrapper-Eclipse communication is useful only if it updates Eclipse's graphical display with proper debug data. Figure 1 shows the CDT debug perspective, and you can see the many views that present the state of the target's execution. Many of the views — Breakpoints, Modules, and Expressions — are provided by Eclipse, but CDT adds three views to the perspective: Executables View, Disassembly View, and Signals.
Figure 1. The CDT debug perspective
These views create and receive debug events in similar ways. This section explains the
Signals View. This view, displayed prominently above, lists all the
signals the target can receive and shows which can be passed to the process. When the
view first appears, the
SignalsViewContentProvider calls on
the
CDebugTarget to provide a list of signals. This target
accesses the CDI target and asks it for the signals in its CDI-model hierarchy. When the
array of
ICDISignals is returned, the
CDebugTarget updates its own model elements and sends them to the
SignalsViewContentProvider, which uses them to populate the Signals View.
When you right-click an entry in the Signals View, the Resume with Signal
context-menu option lets you continue the target's execution and send the selected
signal to the process. This option calls on the
SignalsActionDelegate. When this option is selected, the delegate
calls on the CDI target to resume its execution with the
ICDISignal corresponding to the selected signal. The target creates
an MI command for the signal and invokes
MISession.postCommand(), which sends the command to gdb.
When gdb responds, the process of updating the Signals View takes five steps:
- The
MISessionanalyzes the MI output from gdb and determines whether a signal setting is being changed. If so, it fires an
MISignalChangedEvent.
- The CDI
EventManagerlistens for the
MISignalChangedEventand responds by creating a CDI event:
ChangedEvent. Then it fires the event and alerts all
ICDIEventListeners.
- The
CDebugTargetreceives the event from the
EventManagerand determines whether the
ChangedEventrelates to a signal change. If so, it calls on its
CSignalManagerto process the CDI event.
- The
CSignalManagerupdates its model elements and fires a
DebugEventwhose type is given by
DebugEvent.CHANGE.
- The
SignalViewEventHandlerreceives the
DebugEvent, checks to make sure it deals with signals, and refreshes the Signals View.
Understanding the involved operation of the Signals View is important for two reasons; It serves as a concrete example of how the different model elements work together, and it shows how you can build similar views that interact with Eclipse, gdb, and the CDI.
Conclusion
Two session objects (
MISession and
Session), two targets (
CDebugTarget and
Target), and two completely different hierarchies of model
elements — the operation of the CDT debugger is so complicated that you may
wonder whether any of the developers were related to Rube Goldberg. Still, the code for
the CDT debugger was written with modularity in mind, and the better you understand its
inner workings the easier it will be to insert your own modules. And remember: The
learning curve is steep, but adding new features to the CDT is far easier than building
a custom debugging application from scratch.
Download
Resources
Learn
- Visit the Eclipse CDT at Eclipse.org.
- Read the CDT project leader's blog.
-. | http://www.ibm.com/developerworks/library/os-eclipse-cdt-debug2/index.html | CC-MAIN-2014-23 | en | refinedweb |
Hey, Scripting Guy! Greetings from an old VBScript guy. I have a problem with a Windows PowerShell script. It returns the unexpected result of "RANSFER" instead of the desired result of "\TRANSFER". One more thing: How can I use the TrimStart method to cut exactly what was given instead of doing it like it does? Here is the script:
$FullPath ="\\TVVVGXMFIL001D\IT-TEMP\MIHA\TRANSFER"
$PathToRemove ="\\TVVVGXMFIL001D\IT-TEMP\MIHA"
$FullPath.TrimStart($PathToRemove)
- MH
Hi MH,
Cool question! I had to look this up myself. Here is the skinny on the TrimStart method.
The TrimStart method removes from the current string all leadingStart method returns "abc456xyz789".
The behavior of TrimStart is because of the way it is implemented in the .NET Framework; therefore, the Windows PowerShell team does not really have control of it. It is not, however a bug. It is just strange.
TrimChars is an array, not a simple string of characters. So when you have your path to remove, it is looking at an array of letters. In your full path string, the first letter that is not in your array is the letter R, so it trims from there. Here are the first two parts of your code. If you look at each path, the $PathToRemove variable already has the letter T in it, so when we see the T in front of the word Transfer, the T is also removed. So the first letter that is not part of the array that gets created from the $PathToRemove variable will be where the remainder of the string is found. In this example, the part that is left is Ransfer, and not Transfer as you were expecting:
$FullPath="\\TVVVGXMFIL001D\IT-TEMP\MIHA\TRANSFER"
$PathToRemove="\\TVVVGXMFIL001D\IT-TEMP\MIHA"
Here is how I would write what you are trying to do; I would use the Split-Path cmdlet as shown here:
PS C:\>p $FullPath="\\TVVVGXMFIL001D\IT-TEMP\MIHA\TRANSFER"
PS C:\> Split-Path $FullPath -Leaf
TRANSFER
Answer to your last question: It can't—that is just the way that it works.
By the way, this is for free: If you are missing instr, you can use the IndexOF method from the string class. You could then pair that with a substring. This is documented at.
Hey, Scripting Guy! I have just found this script:. It is on the right track, but as usual I want something slightly different. In my case, I want the files I have selected already (normally between 1 and 20) located in a folder on the network to all become read-only files. Some may already be locked, so that's why I liked the checking loop rather than the toggling approach. Is this possible? Ideally the script should be on my C: drive and not in the individual folder next to the target files. I am using Windows XP Professional.- AH
Hi AH,
ReadOnly = 1
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFolder = objFSO.GetFolder("C:\fso")
Set colFiles = objFolder.Files
For Each objFile in colFiles
If objfile.attributes And ReadOnly Then
WScript.Echo objFile.name + " is already readonly"
Else
objFile.Attributes = objFile.Attributes Xor ReadOnly
End if
Hey, Scripting Guy! I am trying to run the following <formatting role="bold">enable mailbox</formatting> command on users who are not mailbox enabled. I then want to write the output to a file. This file is invoked by a .bat file. Here is the script:
$date = ((get-date).toString('MMM-dd-yyyy'))
$outputfile = "c:\scripts\" + $date + '.out'
get-date | Out-File $outputfile -append
get-user -organizationalUnit TestOU |
where-object{$_.RecipientType -eq "User"} |
Enable-Mailbox -Database "ORE\MailBox Database" |
get-mailbox | s
elect name,windowsemailaddress,database |
Out-File $outputfile –append
However, I get an error returned in red text. When I launch the Exchange Management Shell and just copy and paste the commands, it works just fine and writes to my output file. Any idea about what's going wrong here?
- PJ
Hi PJ,
When you open the Exchange Management Shell, the Exchange snap-in gets loaded. When you invoke the script via a .bat file, the Windows PowerShell console is loading, but is not loading the Exchange snap-in. What you need to do is to load the Exchange snap-ins within your script. Add the following line of code to the beginning of your script:
ADD-PSSnapin –name *exchange*
Hey, Scripting Guy! Thank you very much for the Scriptomatic 2.0. I must admit I was surprised to see such a fine tool coming from Microsoft. Unfortunately, I have a problem when I run it on Windows Vista. It gives me an error.- MP
Hi MP,
Because of the way Scriptomatic 2.0 enumerates the WMI namespaces, it must be launched with administrator rights on Windows Vista. And because it is an HTA, it is not aware of the need for administrator rights. This causes Scriptomatic 2.0 to fail, rather than bringing up the UAC dialogue box to request admin rights. To run Scriptomatic 2.0 on Windows Vista, you need to first create a shortcut to the command prompt. Edit the shortcut so that it will run as an administrator. This is due to the fact that an .HTA file is not associated with the runas command. For that matter, neither is the .vbs file. When you have your administrator command prompt, you need to paste the path to Scriptomatic 2.0, or type the path to Scriptomatic 2.0 at the command prompt. When you have done this, you can launch Scriptomatic 2.0 in a normal fashion (if any of this stuff is normal). Going forward, you may want to actually edit the path of the elevated command prompt so that it actually launches Scriptomatic 2.0. PowerShell Scriptomatic is written in C# and uses an embedded manifest, which tells it that it needs to have administrator rights. It therefore will prompt for administrator rights when it is run on Windows Vista.
Ed Wilson and Craig Liebendorfer, Scripting Guys | http://blogs.technet.com/b/heyscriptingguy/archive/2008/11/14/quick-hits-friday-the-scripting-guys-respond-to-a-bunch-of-questions.aspx | CC-MAIN-2014-23 | en | refinedweb |
Creating a Custom Rules Store
The Autoscaling Application Block includes two rules store implementations that you can select from in the block configuration: an XML rules store in Windows Azure blob storage and an XML rules store on the local file system. The first is intended for use when you host the block in Windows Azure, the second when you host the block on-premises. Both share the same XML schema.
You can create your own custom rules store, for example to store the rules in a SQL Server database. In this scenario, both the location of the store and the format of the stored rules would differ from the two existing rules store implementations.
A custom rules store implementation must implement the IRulesStore interface shown in the following code sample.
You should notify the block whenever the contents of your rules store change by using the StoreChanged event so that the block can load the new rule definitions. The GetRules method returns a collection of Rule objects, and the GetOperands method returns a collection of Operand objects.
If you want to pass custom configuration parameters to your custom rules store, your custom rules store class should have a constructor that takes a single parameter of type NameValueCollection, as shown in the following code sample. Note the use of the ConfigurationElementType attribute to decorate the class.
[ConfigurationElementType(typeof(CustomRulesStoreData))] public class CustomRulesStore : IRulesStore { public CustomRulesStore(NameValueCollection attributes) { ... } public IEnumerable<Rule> GetRules() { ... } public IEnumerable<Rules.Conditions.Operand> GetOperands() { ... } public event EventHandler<EventArgs> StoreChanged { ... } }
You must deploy the assembly that implements your custom rules store with the Autoscaling Application Block.
You must tell the Autoscaling Application Block about your custom rules store by using the Enterprise Library configuration tool. The following procedure shows how to configure the block to use a custom rules store.
Configuring the Autoscaling Application Block to use a custom rules store
- To change the rules store implementation to use a custom rules store, click the plus sign icon at the top right of the Rules Store panel and then click Set Rules Store.
- To store your rules in a custom rules store, click Use Custom Rules Store, and then click Yes to confirm the change. Use the Type Name box to identify the type of your custom rules store implementation.
- You can provide any additional configuration data that your custom rules store requires by adding attributes. Each attribute is a key/value pair. The block passes all the key/value pairs to the constructor of your custom rules store class.
Last built: June 7, 2012 | http://msdn.microsoft.com/en-us/library/hh680933(d=printer,v=pandp.50).aspx | CC-MAIN-2014-23 | en | refinedweb |
Get-NetAdapterRdma
Updated: October 17, 2013
Applies To: Windows 8.1, Windows PowerShell 4.0, Windows Server 2012 R2
Get-NetAdapterRdma
Syntax
Parameter Set: ByName Get-NetAdapterRdma [[-Name] <String[]> ] [-AsJob] [-CimSession <CimSession[]> ] [-IncludeHidden] [-ThrottleLimit <Int32> ] [ <CommonParameters>] Parameter Set: ByInstanceID Get-NetAdapterRdma -InterfaceDescription <String[]> [-AsJob] [-CimSession <CimSession[]> ] [-IncludeHidden] [-ThrottleLimit <Int32> ] [ <CommonParameters>]
Detailed Description
The Get-NetAdapterRdma cmdlet gets the remote direct memory access (RDMA) properties of an RDMA-capable network adapter. RDMA is a feature that enables network adapters to transfer data directly between each other without requiring the main processor of the system to be part of that transfer. This results in lower latency and lower processor utilization.RdmaSettingData
The
Microsoft.Management.Infrastructure.CimInstanceobject is a wrapper class that displays Windows Management Instrumentation (WMI) objects. The path after the pound sign (
#) provides the namespace and class name for the underlying WMI object.
Examples
EXAMPLE 1
This example gets the RDMA properties from the network adapter named MyAdapter.
EXAMPLE 2
This example displays all the RDMA properties from the adapter named MyAdapter.
EXAMPLE 3
This example gets all RDMA capable network adapters with RDMA enabled.
Related topics | http://technet.microsoft.com/en-us/library/jj130896.aspx | CC-MAIN-2014-23 | en | refinedweb |
Introduction
Everywhere you go, people are using mobile devices to keep in touch with family and friends, take a picture to post on a social website, find the location of a restaurant, or check the latest news headlines. Mobile devices come in many different shapes and styles. Mobile phones run a variety of different operating systems such as Apple's iOS, Google's Android, and Research In Motion's Blackberry. Some have large displays, physical keyboards, and run on 3G, 4G, or WiFi networks. Mobile phones may also have sensors for acceleration, location, or even payments. Some of these devices aren't even phones; they're tablets with larger displays and a data-only network connection.
Despite their differences, mobile devices are similar in that they all run mobile applications. Mobile applications can be divided into two types:
- Native applications
Installed on the device, native applications are binary executable programs created using a software development kit (SDK) and distributed through an app store. There is an SDK for each mobile operating system, which unfortunately is different from the SDKs of other operating systems.
For example, to create an application for iOS, you must download and install the iOS SDK and development tools, and you must code your application using the Objective-C programming language. An Android application is developed using the Android SDK and written in Java. Thus, to create a mobile application, you must learn each SDK and write your application using the supported programming language. There's a steep learning curve for each platform's SDK, so mobile application development is quite complex.
- Web applications
Loaded into the mobile web browser, web applications are different from native applications in that they're coded using web technologies (HTML, JavaScript, and CSS) regardless of the device's operating system. There's no need to learn different programming languages for each device. HTML and JavaScript are familiar to web developers since they're used to create web pages loaded into your desktop browser. For the most part, mobile browsers can render the same web page, but websites often provide a mobile version that has less content and loads faster (due to a smaller screen size and slower network connection).
To "run" a web application, the user enters a URL into the mobile web browser. This loads the web page, which is the entry point into a web application. Web applications are not distributed through an app store; they are simply links that can be included in other web pages, e-mails, or even hard copy..
PhoneGap is a popular toolkit for building hybrid applications. It's an open source mobile framework that includes a JavaScript API for access to device features, such as the accelerometer and camera.
This article shows you how to develop a hybrid mobile Android application using the PhoneGap and Dojo Mobile toolkits. Learn how to use the Android emulator and tools for testing applications, and see how to run your application on an Android device or tablet.
Prerequisites
This article assumes you have some familiarity with the Eclipse development environment, HTML, JavaScript, and CSS. The following software is required:
- Windows, OSX, or Linux operating system
- Java Development Kit (JDK) 5 or JDK 6 (a JRE is not sufficient)
- An Eclipse development environment, such as Eclipse Helios V3.6 or later, or IBM Rational Application Developer V8
- Android SDK and platforms (r12 or later)
- Android Development Toolkit (ADT) plugin for Eclipse
- PhoneGap SDK (V1.0.0 or later)
- Dojo Toolkit (V1.6 or later)
See Resources for links to download the software.
Set up your development environment
To set up the development environment, you need to perform the following steps:
- Install the JDK and Eclipse or Rational Application Developer.
- Download the Android SDK.
- Download and install the ADT plugin for Eclipse.
- Configure Eclipse for Android.
- Install the required Android platforms.
- Create a new Android Virtual Device (AVD).
- Download the PhoneGap SDK.
- Download the Dojo Toolkit.
Install the JDK and Eclipse or Rational Application Developer
The first task is to verify that JDK 5 or greater is installed. If not, download Java SE JDK (see Resources).
You can use either Eclipse or IBM Rational Application Developer (RAD) for this article. Windows or Linux is supported by RAD. OSX developers can use Eclipse.
RAD consists of IBM's version of Eclipse with additional IBM tools to support Java EE, including IBM's Web 2.0 Feature Pack. To use RAD, you will need Version 8 or later. RAD includes IBM's JDK, which is used by default. However, this JDK does not contain the Java packages needed to create and sign an Android application. To use the Java SE JDK instead, you need to replace C:/Program Files/IBM/SDP/eclipse.ini (or the location you installed RAD) with the information in Listing 1.
Listing 1. Content of eclipse.ini for RAD on Windows
-startup plugins/org.eclipse.equinox.launcher_1.1.1.R36x_v20101122_1400.jar --launcher.library plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.2.R36x_v20101222 --launcher.defaultAction openFile --launcher.XXMaxPermSize 256M -showsplash org.eclipse.platform --launcher.XXMaxPermSize 256m --launcher.defaultAction openFile -install C:/Program Files/IBM/SDP -vmargs -Dosgi.requiredJavaVersion=1.5 -Xms40m -Xmx512m
If you want to use Eclipse, you will need Version 3.6 or later. Since you'll be writing web services later in this series, download the Eclipse IDE for Java EE Developers (see Resources). However, if you only want to write an Android application, you can get by with using the Eclipse IDE for Java Developers.
Eclipse and RAD are very similar. In this article, when Eclipse is mentioned it implies either Eclipse or RAD. However, figures of screens are from RAD running on Windows.
Download the Android SDK
The Android SDK (see Resources) must be used to make Android applications. It is a collection of command line programs that are used to compile, build, emulate, and debug Android applications.
If you are installing on Windows, it's easiest to download the zip package and extract it into your C:\ root directory. There are a couple of issues with the r12 installer not being able to detect Java or install into a directory with spaces in it's name.
Download and install the ADT plugin
Android provides an Eclipse plugin that greatly simplifies application development. It integrates with Eclipse to provide a rapid development environment. To install the Android Development Toolkit (ADT) plugin (see Resources), follow the steps below:
- From Eclipse, select the menu items Help > Install New Software… > Add…
- Enter the name and URL location of the software, as shown in Figure 1. Name: Android ADT Eclipse plugin; location:.
Figure 1. Installing Android ADT plugin
- Select Developer Tools, verify that all check boxes are selected, as shown in Figure 1, and then select Next.
- As shown in Figure 2, select Next to accept license agreements and install the items.
Figure 2. Install details
After installation is complete, restart Eclipse.
Configure Eclipse for Android
To configure Eclipse for Android, display the Preferences dialog.
- For Windows, select Window > Preferences > Android.
- For OSX, select Eclipse > Preferences > Android.
For the SDK location, select Browse…, and then choose the directory where the Android SDK is located, as shown in Figure 3:
Figure 3. Specify SDK location to Eclipse
Select Apply and OK.
Add the Android LogCat view to the Eclipse IDE to aid with debugging:
- Window > Show View > Other…
- Android > LogCat
Figure 4. LogCat view
Install an Android platform
To compile an Android application for a particular version, one or more Android platforms must be downloaded and installed. The platforms include the library files and emulators.
From Eclipse, open the Android SDK and AVD Manager, which is used to manage the Android SDK versions and emulators used with your applications.
Select Window > Android SDK and AVD Manager, as shown in Figure 5:
Figure 5. Menu item for Android configuration
Install the SDK platforms required for the versions of Android on which you wish to run your applications.
The example application will be using GPS location services, so you should select and install a platform based upon the Google APIs. For example, select Google APIs by Google Inc., Android API 8, revision 2, as shown in Figure 6. If you are not using GPS, then you can install platforms listed under the Android Repository category.
For Windows installations, select the Google USB Driver package to provide support for connecting your Android phone.
Select Install Selected.
Figure 6. Android SDK and AVD Manager
Accept the licence agreement for each package, and then select Install, as shown in Figure 7:
Figure 7. Android packages to install
The manager will download and install the selected packages.
Create a virtual Android device
The Android SDK and AVD Manager are also used to create and manage the emulator instances to be used with your applications.
From the Virtual Devices page, select New… to create a new AVD. As shown in Figure 8, enter a name, target, SD card size, and HVGA skin:
Figure 8. Creating a new AVD
Select Create AVD.
Download PhoneGap SDK
PhoneGap is an open source hybrid mobile application framework that supports multiple platforms, including Android, iOS, Blackberry, Palm, Symbian, and Windows Phone. With PhoneGap you can easily write a cross-platform mobile application using standard web technologies (HTML, JavaScript, and CSS) and access device features such as the accelerometer or camera from JavaScript. See Resources for a link to information for the supported features, which provides the latest details about PhoneGap.
PhoneGap provides a collection of JavaScript APIs that enables access to many device features not available from the mobile web browser for a typical web application. This is accomplished by using a native wrapper around your web application. PhoneGap combines web application code with the device's browser renderer to produce a native application that can be deployed to an app store and installed on the device.
Features included as part of the PhoneGap API enable access to a device's accelerometer, audio and video capture, camera, compass, contacts, file, geolocation, network, notification, and storage. The PhoneGap API documentation (see Resources) has more details and examples.
After you download PhoneGap (see Resources), you'll later copy the code into your Android project (in the "Create a new Android project" section).
Download Dojo
Dojo Toolkit is an open source JavaScript toolkit designed for rapid development of websites and applications that are loaded and run in a web browser.
Since mobile web browsers are not as capable as desktop browsers, Dojo includes a mobile version, called Dojo Mobile. It is optimized for mobile web browsers, and it provides many UI widgets and themes you can use to style your mobile application to mimic a native application.
Some key features of Dojo Mobile include:
- Lightweight loading of widgets due to the Dojo Mobile parser
- CSS3 animations and transitions for native-like application experience on high-end iOS and Android devices
- Themes included for both iOS and Android look and feel
- Compatibility with non-CSS3-compatible devices and browsers
- Full declarative syntax, allowing for an easy learning curve
- A large suite of widgets, with even more in the upcoming Dojo Mobile 1.7
For this article, you will need to download Dojo 1.6 (see Resources).
Create a new Android project
Now that the development environment is set up, let's start by creating a simple Android application.
From Eclipse, select File > New > Other…, then Android > Android project. You should see the dialog shown in Figure 9.
Figure 9. New Android project
As shown in Figure 9, enter a project name, select a build target, and enter the application name, package name, and activity name. Click on Finish to create the project.
Add the PhoneGap library
You now have a simple Android application. Before you can write a PhoneGap application, you need to add the PhoneGap library. There are two files: a JavaScript file that contains the PhoneGap API called by our application, and a native JAR file containing the native implementation for the PhoneGap API.
- Expand the AndroidPhoneGap project tree view, as shown in Figure 10:
Figure 10. Android project with PhoneGap library
- Create the directory \assets\www. Also create the directory \libs if it doesn't already exist.
- Unzip the PhoneGap download and locate the Android subdirectory.
- Copy the three PhoneGap library files for Android to the following Eclipse project folders:
- Copy phonegap-1.0.0.jar to \libs\phonegap-1.0.0.jar
- Copy phonegap-1.0.0.js to \assets\www\phonegap-1.0.0.js
- Copy xml/plugins.xml to \res\xml\plugins.xml
Even though the PhoneGap JAR file is copied into the project, you also need to add it to the project's build path.
- Select Project > Properties > Java Build Path > Libraries > Add JARs….
- Add phonegap-1.0.0.jar by navigating to it in the project, as shown in Figure 11:
Figure 11. Adding PhoneGap JAR
The final step in preparing the example Android application to use PhoneGap
is to modify App.java. Because a PhoneGap application is written in HTML
and JavaScript, you need to change App.java to load your HTML file using
loadUrl(), as shown in Listing 2. You can edit App.java by double-clicking on App.java in the tree view
shown in Figure 10.
Listing 2. App.java
Package com.ibm.swgs; import android.os.Bundle; import com.phonegap.*; public class App extends DroidGap //Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); //setContentView(R.layout.main); super.loadUrl(""); } }
Write the PhoneGap application
You're now ready to start writing the PhoneGap application. For Android,
files under the asset directory are referenced using. As specified in
loadUrl() in Listing 2, you need
to create an index.html file under assets/www.
After creating index.hml, enter the contents of Listing 3 below.
Listing 3. index.html
<!DOCTYPE HTML> <html> <head> <title>PhoneGap</title> <script type="text/javascript" charset="utf-8" src="phonegap-1.0.0.js"></script> </head> <body onload='document.addEventListener("deviceready", deviceInfo, false);'> <script> function deviceInfo() { document.write("<h1>This is Phonegap 1.0.0 running on "+device.platform+" "+device.version+"!</h1>"); } </script> </body> </html>
A brief explanation of index.html is in order. Before calling any PhoneGap
APIs, we must wait for the
deviceready event,
which indicates that the native portion of PhoneGap has been initialized
and is ready. In Listing 3, the
onload callback registers for
deviceready. When it fires, we write out the
device's OS and version.
Since PhoneGap uses native features that are protected by permissions, you
need to modify AndroidManifest.xml to include these
uses-permission tags. You also need to specify
the
support-screens tag, the
android:configChanges property, and the
com.phonegap.DroidGap activity tag, as shown in
Listing 4:
Listing 4. AndroidManifest.xml
<?xml version="1.0" encoding="utf-8"?> <manifest xmlns: <supports-screens android: <uses-permission android: <uses-permission android: <uses-permission android: <uses-permission android: <uses-permission android: <application android: <activity android: <intent-filter> <action android: <category android: </intent-filter> </activity> <activity android: <intent-filter> </intent-filter> </activity> </application> </manifest>
Run the application in the Android emulator
The PhoneGap application is now ready to run. Select Run > Run As > Android Application, and you should see something similar to Figure 12
Figure 12. Android emulator
Eclipse automatically builds the application, launches the emulator, and installs and runs it on the emulator.
The emulator can take several minutes to start up. To speed development, keep the emulator running until you are done with your development session. Eclipse will automatically use a running emulator instead of launching a new one.
Run the application on an Android phone
If you have an Android phone, you can run the PhoneGap application on your device. However, before you can use your phone for development, you need to turn on USB debugging, as follows:
- Go to the Home screen and select Menu.
- Select Settings > Applications > Development.
- Enable USB debugging.
- You also need to declare the application as debuggable in the Android Manifest. Edit the AndroidManifest.xml file to add
android:debuggable="true"to the
<application>element.
- Attach an Android phone to your development machine with USB.
- To run the application, select Run As > Android Application.
You will be prompted to choose between the emulator or real device as the target. Select the Android phone, as shown in Figure 13:
Figure 13. Select the device
Once the application has been downloaded and installed on your phone, it will be launched, as shown in Figure 14:
Figure 14. Application running on device
Use the Dalvik Debug Monitor Server (DDMS)
The ADT plugin includes a Dalvik Debug Monitor Server (DDMS) perspective for debugging. DDMS, which can be used to track and debug the application flow, can be used with the emulator or a real device.
The DDMS perspective can be started from Eclipse by selecting Window > Open Perspective > Other... > DDMS. Figure 15 shows an example.
Figure 15. DDMS window inside Eclipse
The DDMS can also be started using command line from the location of the Android SDK.
- For Windows: C:\android-sdk-windows\tools\ddms.bat
- For OSX: .../android-sdk-mac-86/tools/ddms
Figure 16. Standalone debugger
From DDMS, you can:
- View the log console
- Show the status of processes on the device
- Examine thread information
- View heap usage of a process
- Force garbage collection
- Track memory allocation of objects
- Perform method profiling
- Work with a device's file system
- Perform screen captures of the device
- Emulate phone operations
See the DDMS documentation (in Resource) for more information.
Extend the project with Dojo
Dojo is a JavaScript toolkit that offers several benefits to mobile applications. It provides themes that mimic native mobile applications, and it has user interface (UI) containers and widgets that simplify development of your application UI.
Setup for Dojo
To use Dojo, you need to copy it into the example project.
Create the following directories, as shown in Figure 17.
- \assets\www\libs
- \assets\www\libs\dojo
- \assets\www\libs\dojo\dojo
- \assets\www\libs\dojo\dojox
Copy the following Dojo files:
- dojox\mobile.js to \assets\www\libs\dojo\dojox directory
- dojox\mobile directory to \assets\www\libs\dojo\dojox directory
- dojo\dojo.js to \assets\www\libs\dojo\dojo directory
Figure 17. Android project tree with Dojo added
To load Dojo, you need to edit index.html and include the lines in Listing 5 in the
<head>
section before the phonegap-1.0.0.js script tag.
Listing 5. Adding Dojo to index.html
<meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=1, minimum-scale=1,user-scalable=no"/> <link rel="stylesheet" href="libs/dojo/dojox/mobile/themes/android/android.css" type="text/css" media="screen" title="no title" charset="utf-8"> <script type="text/javascript" src="libs/dojo/dojo/dojo.js" djConfig="parseOnLoad:true"> </script>
After the phonegap-1.0.0.js script tag, include the
require statements, as shown in Listing 6, for the Dojo mobile parser.
Listing 6. Modify index.html to load mobile Dojo
<script type="text/javascript"> dojo.require("dojox.mobile.parser"); dojo.require("dojox.mobile"); </script>
Updating the application
Replace the existing
<body> tag in
index.html with a new
<body> tag that
contains two simple <div> sections, as shown in Listing 7:
Listing 7. Modify body of index.html
<body> <!-- ACCIDENT TOOLKIT PAGE --> <div dojoType="dojox.mobile.View" id="accHelp" selected="true"> <h1 dojoType="dojox.mobile.Heading">Accident</h1> <div class="text">If you are in an accident, you should first move to a safe location. Below are some additional actions you can take:</div> <ul dojoType="dojox.mobile.RoundRectList"> <li dojoType="dojox.mobile.ListItem" onclick="window.location='geo:0,0?q=police';">Call the Police</li> <li dojoType="dojox.mobile.ListItem" onclick="window.location='geo:0,0?q=towing';">Call for a Tow Truck</li> <li dojoType="dojox.mobile.ListItem" moveTo="accInfo" transition="slide" onClick="itemClicked();">Exchange Driver Info</li> <li dojoType="dojox.mobile.ListItem" moveTo="accInfo" transition="slide" onClick="itemClicked();">Record Accident Location</li> <li dojoType="dojox.mobile.ListItem" moveTo="accInfo" transition="slide" onClick="itemClicked();">Take Photos of Accident</li> </ul> </div> <!-- EXCHANGE DRIVER INFO PAGE --> <div dojoType="dojox.mobile.View" id="accInfo"> <h1 dojoType="dojox.mobile.Heading" back="Accident" moveTo="accHelp" onClick="console.log('Going back');">Driver</h1> <h2 dojoType="dojox.mobile.RoundRectCategory">Other Driver Info</h2> </div> </body>
As shown in Listing 8, add a simple JavaScript function,
itemClicked() after the last
<div> tag to log to the console.
Listing 8. Onclick handler
<script> function itemClicked() { console.log("itemClicked()"); } </script>
Running on an emulator
Run the application as an Android application by right-clicking on the project and selecting Run as > Android Application.
Figure 18. Application running on emulator
Each screen in a Dojo application is defined by a
div element with
dojoType='dojox.mobile.View', as shown in Listing 9. The initial screen is identified with the
attribute
selected='true'.
The title of the screen is defined by a
<h1
dojoType='dojox.mobile.Heading'>
tag.
Listing 9. Defining screen and title
<div dojoType="dojox.mobile.View" id="accHelp" selected="true"> <h1 dojoType="dojox.mobile.Heading">Accident</h1> </div>
Notice the
> on the last three list items. It
is an indicator that another Dojo screen will be loaded.
A list item tag with
dojoType='dojox.mobile.ListItem' is used to
display a list of items that can be selected, as shown in Listing 10. It is rendered as a native selection list. The
moveTo attribute specifies which
div to display, and the
transition attribute specifies how it is to be
moved into view.
Listing 10. List item to load a new screen
<li dojoType="dojox.mobile.ListItem" moveTo="accInfo" transition="slide" onClick="itemClicked();">Exchange Driver Info</li>
Select Exchange Driver Info, which will hide the current
div and show the target div with
id='accInfo'. You should see the Driver screen
slide into view, as shown in Figure 19
Figure 19. Driver information screen
The list items can be used for more than loading other screens. For
example, the
onclick handler can be used to
display a Google map with a search for the nearest police station. Listing 11 shows the code.
To return back to the previous screen, select the Accident button in the title.
Listing 11. List item to load a Google map
<li dojoType="dojox.mobile.ListItem" onclick="window.location='geo:0,0?q=police';">Call the Police</li>
Many mobile devices support the geo: protocol. By loading a URI of the form
geo:lat,lng?q=query, the native Google map
service will be displayed.
Select Call the Police on the emulator, which will display a map of the nearest police station, as shown in Figure 20:
Figure 20. Search for nearest police station
Your location may be different, depending on the latitude and longitude entered under the Emulator Control in DDMS. Figure 21 shows the location settings.
Figure 21. Location settings in DDMS
Run on a device
If you have an Android phone connected, run the application on your device as described in the "Run the application on an Android phone" section. As shown in Figure 22, the screens will look similar to those on the emulator. The police search should return a police station near your current location.
Figure 22. Application running on device
Conclusion
In this article, you learned how to combine PhoneGap and Mobile Dojo to rapidly create a hybrid mobile application for Android that looks and behaves like a typical Android application. You could write it quickly because we used HTML and JavaScript instead of Java. Given that the web code remains the same across all mobile operating systems, this hybrid application could be easily built for iOS and Blackberry with minimal effort.
Stay tuned for Part 2 in this series, which will cover writing a mobile insurance application using Dojo and PhoneGap.
Resources
Learn
- Using DDMS for Android debugging explains how DDMS interacts with a debugger.
- Get DDMS documentation on the Android developer's site.
- Explore the PhoneGap:
- The PhoneGap Get Started Guide covers installation, setup, and deployment.
- Access all the Dojo documentation.
- "Get started with Dojo Mobile 1.6" (developerWorks, Jun 2011) shows how to include and use Dojo Mobile widgets and components in your applications. Also learn how to wrap your web application in a native application using PhoneGap.
- developerWorks Web development zone: Find articles covering various Web-based solutions.
- developerWorks podcasts: Listen to interesting interviews and discussions for software developers.
- developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts.
Get products and technologies
- Download Eclipse.
- Get Dojo 1.6.
- Download IBM Rational Application Developer 8.0.3 (Trial version).
- Download Java SE JDK.
- Get the Android SDK download and installation guide.
- Download the ADT Plugin for Eclipse and installation guide.
-. | http://www.ibm.com/developerworks/library/wa-mobappdev1/ | CC-MAIN-2014-23 | en | refinedweb |
Contents
Rename "nobody" user
Summary
Use "nobody:nobody" as the names for the kernel overflow UID:GID pair, and retire the old "nfsnobody" name and the old "nobody:nobody" pair with 99:99 numbers.
Owner
- Name: Zbigniew Jędrzejewski-Szmek
- Name: Lennart Poettering
- Release notes ticket: #104
Current status
Detailed Description
Status quo: Fedora statically defines "nobody:nobody" pair with uid:gid of 99:99 in setup.rpm, and "nfsnobody:nfsnobody" pair with uid:gid of 65534:65534 in nfs-utils.rpm.
This is problematic in a few different ways:
- 65534:65534 is used by the kernel as the overflow identifier, when some UID cannot be represented in the current namespace. This applies to both NFS, but probably more commonly nowadays to UIDs outside of the current user namespace (e.g. when a file or process owned by a user from outside of a container). Calling this "nfsnobody" is misleading.
- the name for the overflow user is only defined when nfs-utils.rpm is installed. In particular in containers people want to minimize the number of packages installed, so nfs-utils is likely not to be installed.
- the static nobody:nobody user/group pair was used for various services for which weren't "worthy" of creating a dedicated user. This is a severely misguided concept, because all processes of the nobody user can ptrace and otherwise interact with each other. Separate users for each service should be used instead, either normal allocated users or systemd's DynamicUser's.
- other distributions use either nobody:nobody or nobody:nogroup for the overflow uid:gid pair, and the different naming in Fedora is confusing and can lead to incorrect use.
We propose to:
- stop using nfsnobody for the overflow uid/gid names
- stop using nobody for the static user 99 and group 99
- use the nobody:nobody pair of names for 65534:65534
Changing existing systems is hard, so this change would apply only to new systems. "New" means systems which have neither the old "nobody" user with uid 99 nor the nfsnobody user defined. During package installation/upgrade a scriptlet would check if either of those two conditions is encountered, and if it is, keep current behaviour (nobody=99, nfsnobody=65534), and otherwise, define nobody=65534.
On "new" systems, the mapping for nobody:nobody would be implemented in two redundant ways:
- as a static allocation in /etc/passwd and /etc/group managed by setup.rpm
- dynamically provided by the nss-systemd module (by compiling systemd with -Dnobody-user=nobody -Dnobody-group=nobody).
On "old" systems a flag would be set from an scriptlet to tell systemd to _not_ provide the "nobody" mapping, so that the existing mapping is used.
Benefit to Fedora
The name for the kernel overflow uid and gid will be always provided, and the name will not be misleading. Unsecure use of the nobody user will be eliminated.
Scope
- Proposal owners:
- recompile systemd with -Dnobody-user=nobody -Dnobody-group=nobody
- patch systemd to support disabling the mapping for nobody in nss-systemd and implement this check in upgrade scriptlets
- propose patches for setup.rpm to add the checks and new mapping listed in Detailed Description on update
(nfs-utils doesn't need to be changed, it's scriptlet will simply fail if a user with uid 65534 already exists.)
- Other developers: watch for regressions
- Release engineering: #7258
- List of deliverables: N/A
- Policies and guidelines: nothing
( already says "Note that system services packaged for Fedora MUST NOT run as the nobody user" so no changes are required there.)
- Trademark approval: N/A (not needed for this Change)
Upgrade/compatibility impact
Things should mostly work OK. The change only applies to "new" systems, which didn't have the old definitions. If something expects either "nfsnobody" to be defined, or hardcodes nobody to uid 99, it will be broken. But such things were already broken, so let's hope they are rare.
How To Test
Check if "getent passwd nobody" or "getent passwd 65534" return something like
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin
on "new" systems, and the old values on "old" systems.
User Experience
Should not be noticeable by users, except that in some circumstances in containers files which were shown with numeric uid and gid will be shown as owned by nobody:nobody. Files which were shown as owned by "nfsnobody" would now we owned by "nobody".
Dependencies
Contingency Plan
- Contingency mechanism: undo all changes and keep using nfsnobody:nfsnobody as the overflow user/group names
- Contingency deadline: beta freeze
- Blocks release? Yes
- Blocks product? all products
Documentation
Previous discussions:
Release Notes
TBD | https://www.fedoraproject.org/wiki/Changes/RenameNobodyUser | CC-MAIN-2020-29 | en | refinedweb |
About this tutorial
This tutorial describes how to make client calls and write servers in C++ using the Low-Level C++ Bindings (LLCPP).
Getting Started has a walk-through of using the bindings with an example FIDL library. The reference section documents the detailed bindings interface and design.
See Comparing C, Low-Level C++, and High-Level C++ Language Bindings for a comparative analysis of the goals and use cases for all the C-family language bindings.
Two build setups exist in the source tree: the Zircon build and the Fuchsia build. The LLCPP code generator is not supported by the Zircon build. Therefore, the steps to use the bindings depend on where the consumer code is located:
- Code is outside
zircon/: Add
//[library path]:[library name]_llcppto the GN dependencies e.g.
"//sdk/fidl/fuchsia.math:fuchsia.math_llcpp", and the bindings code will be automatically generated as part of the build.
- Code is inside
zircon/: Add a GN dependency of the form:
"$zx/system/fidl/[library-name]:llcpp", e.g.
"$zx/system/fidl/fuchsia-mem:llcpp", and the bindings code will be automatically generated as part of the build.
Preliminary concepts
Decoded message: A FIDL message in decoded form is a contiguous buffer that is directly accessible by reinterpreting the memory as the corresponding LLCPP FIDL type. That is, all pointers point within the same buffer, and the pointed objects are in a specific order defined by the FIDL wire-format. When making a call, a response buffer is used to decode the response message.
Encoded message: A FIDL message in encoded form is an opaque contiguous buffer plus an array of handles. The buffer is of the same length as the decoded counterpart, but pointers are replaced with placeholders, and handles are moved to the accompanying array. When making a call, a request buffer is used to encode the request message.
Message linearization: FIDL messages have to be in a contiguous buffer packed according to the wire-format. When making a call however, the arguments to the bindings code and out-of-line objects are usually scattered in memory, unless careful attention is spent to follow the wire-format order. The process of walking down the tree of objects and packing them is termed linearization, and usually involves
O(message size)copying.
Message layout: The in-memory layout of LLCPP structures is the same as the layout of the wire format. The LLCPP objects can be thought of as a view over the encoded message..
Message ownership: LLCPP objects use
tracking_ptrsmart pointers to manage ownership and track whether an object is heap allocated and owned or user-managed and unowned.
Generated API overview
Low-Level C++ bindings are full featured, and support control over allocation as well as zero-copy encoding/decoding. (Note that contrary to the C bindings they are meant to replace, the LLCPP bindings cover non-simple messages.)
Let's use this FIDL protocol as a motivating example:
// fleet.fidl library fuchsia.fleet; struct Planet { string name; float64 mass; handle<channel> radio; };
The following code is generated (simplified for readability):
// fleet.h struct Planet { fidl::StringView name; double mass; zx::channel radio; };
Note that
string maps to
fidl::StringView, hence the
Planet struct
will not own the memory associated with the
name string. Rather, all strings
point within some buffer space that is managed by the bindings library, or that
the caller could customize. The same goes for the
fidl::VectorView<Planet>
in the code below.
Continuing with the FIDL protocol:
// fleet.fidl continued... protocol SpaceShip { SetHeading(int16 heading); ScanForPlanets() -> (vector<Planet> planets); DirectedScan(int16 heading) -> (vector<Planet> planets); -> OnBeacon(int16 heading); };
The following code is generated (simplified for readability):
// fleet.h continued... class SpaceShip final { public: struct SetHeadingRequest final { fidl_message_header_t _hdr; int16_t heading; }; struct ScanForPlanetsResponse final { fidl_message_header_t _hdr; fidl::VectorView<Planet> planets; }; using ScanForPlanetsRequest = fidl::AnyZeroArgMessage; struct DirectedScanRequest final { fidl_message_header_t _hdr; int16_t heading; }; struct DirectedScanResponse final { fidl_message_header_t _hdr; fidl::VectorView<Planet> planets; }; class SyncClient final { /* ... */ }; class Call final { /* ... */ }; class Interface { /* ... */ }; static bool TryDispatch(Interface* impl, fidl_msg_t* msg, fidl::Transaction* txn); static bool Dispatch(Interface* impl, fidl_msg_t* msg, fidl::Transaction* txn); class ResultOf final { /* ... */ }; class UnownedResultOf final { /* ... */ }; class InPlace final { /* ... */ }; // Generated classes for thread-safe async-capable client. struct AsyncEventHandlers { std::variant<fit::callback<void(int16_t)>, fit::callback<void(fidl::DecodedMessage<OnBeaconResponse>)>> on_beacon; }; class ScanForPlanetsResponseContext { /* ... */ }; class DirectedScanResponseContext { /* ... */ }; class ClientImpl { /* ... */ }; };
Notice that every request and response is modelled as a
struct:
SetHeadingRequest,
ScanForPlanetsResponse, etc.
In particular,
ScanForPlanets() has a request that contains no arguments, and
we provide a special type for that,
fidl::AnyZeroArgMessage.
Following those, there are three related concepts in the generated code:
SyncClient: A class that owns a Zircon channel, providing methods to make requests to the FIDL server.
Call: A class that contains static functions to make sync FIDL calls directly on an unowned channel, avoiding setting up a
SyncClient. This is similar to the simple client wrappers from the C bindings, which take a
zx_handle_t.
Interfaceand
[Try]Dispatch: A server should implement the
Interfacepure virtual class, which allows
Dispatchto call one of the defined handlers with a received FIDL message.
[Unowned]ResultOf are "scoping" classes
containing return type definitions of FIDL calls inside
SyncClient and
Call.
This allows one to conveniently write
ResultOf::SetHeading to denote the
result of calling
SetHeading.
InPlace is another "scoping" class that houses functions
to make a FIDL call with encoding and decoding performed in-place directly on
the user buffer. It is more efficient than those
SyncClient or
Call, but
comes with caveats. We will dive into these separately.
Client API
Sync client
(Protocol::SyncClient)
The following code is generated for
SpaceShip::SyncClient. Each FIDL method
always correspond to two overloads which differ in memory management strategies,
termed flavors in LLCPP: managed flavor and caller-allocating flavor.
class SyncClient final { public: SyncClient(zx::channel channel); // FIDL: SetHeading(int16 heading); ResultOf::SetHeading SetHeading(int16_t heading); UnownedResultOf::SetHeading SetHeading(fidl::BytePart request_buffer, int16_t heading); // FIDL: ScanForPlanets() -> (vector<Planet> planets); ResultOf::ScanForPlanets ScanForPlanets(); UnownedResultOf::ScanForPlanets ScanForPlanets(fidl::BytePart response_buffer); // FIDL: DirectedScan(int16 heading) -> (vector<Planet> planets); ResultOf::DirectedScan DirectedScan(int16_t heading); UnownedResultOf::DirectedScan DirectedScan(fidl::BytePart request_buffer, int16_t heading, fidl::BytePart response_buffer); };
The one-way FIDL method
SetHeading(int16 heading) maps to:
ResultOf::SetHeading SetHeading(int16_t heading): This is the managed flavor.. Here is an example of using it:
// Create a client from a Zircon channel. SpaceShip::SyncClient client(zx::channel(client_end)); // Calling |SetHeading| with heading = 42. SpaceShip::ResultOf::SetHeading result = client.SetHeading(42); // Check the transport status (encoding error, channel writing error, etc.) if (result.status() != ZX_OK) { // Handle error... }
In general, the managed flavor is easier to use, but may result in extra allocation. See ResultOf for details on buffer management.
UnownedResultOf::SetHeading SetHeading(fidl::BytePart request_buffer, int16_t heading): This is the caller-allocating flavor, which defers all memory allocation responsibilities to the caller. Here we see an additional parameter
request_bufferwhich is always the first argument in this flavor. The type
fidl::BytePartreferences a buffer address and size. It will be used by the bindings library to construct the FIDL request, hence it must be sufficiently large. The method parameters (e.g.
heading) are linearized to appropriate locations within the buffer. If
SetHeadinghad a return value, this flavor would ask for a
response_buffertoo, as the last argument. Here is an example of using it:
// Call SetHeading with an explicit buffer, there are multiple ways... // 1. On the stack fidl::Buffer<SetHeadingRequest> request_buffer; auto result = client.SetHeading(request_buffer.view(), 42); // 2. On the heap auto request_buffer = std::make_unique<fidl::Buffer<SetHeadingRequest>>(); auto result = client.SetHeading(request_buffer->view(), 42); // 3. Some other means, e.g. thread-local storage constexpr uint32_t request_size = fidl::MaxSizeInChannel<SetHeadingRequest>(); uint8_t* buffer = allocate_buffer_of_size(request_size); fidl::BytePart request_buffer(/* data = */buffer, /* capacity = */request_size); auto result = client.SetHeading(std::move(request_buffer), 42); //.
The two-way FIDL method
ScanForPlanets() -> (vector<Planet> planets) maps to:
ResultOf::ScanForPlanets ScanForPlanets(): This is the managed flavor. Different from the C bindings, response arguments are not returned via out-parameters. Instead, they are accessed through the return value. Here is an example to illustrate:
// It is cleaner to omit the |UnownedResultOf::ScanForPlanets| result type. auto result = client.ScanForPlanets(); // Check the transport status (encoding error, channel writing error, etc.) if (result.status() != ZX_OK) { // handle error & early exit... } // Obtains a pointer to the response struct inside |result|. // This requires that the transport status is |ZX_OK|. SpaceShip::ScanForPlanetsResponse* response = result.Unwrap(); // Access the |planets| response vector in the FIDL call. for (const auto& planet : response->planets) { // Do something with |planet|... }
When the managed flavor is used, the returned object (
resultin this example) manages ownership of all buffer and handles, while
result.Unwrap()returns a view over it. Therefore, the
resultobject must outlive any references to the response.
UnownedResultOf::ScanForPlanets ScanForPlanets(fidl::BytePart response_buffer): The caller-allocating flavor receives the message into
response_buffer. Here is an example using it:
fidl::Buffer<ScanForPlanetsResponse> response_buffer; auto result = client.ScanForPlanets(response_buffer.view()); if (result.status() != ZX_OK) { /* ... */ } auto response = result.Unwrap(); // |response->planets| points to a location within |response_buffer|.
The buffers passed to caller-allocating flavor do not have to be initialized. A buffer may be re-used multiple times, as long as it is large enough for the calls involved.
Async-capable Client (
fidl::Client<Protocol>)
This client is thread-safe and supports both synchronous and asynchronous calls as well as asynchronous event handling. It also supports use with a multi-threaded dispatcher.
Creation
A client is created with a client-end
zx::channel, an
async_dispatcher_t*,
an optional hook (
OnClientUnboundFn) to be invoked when the channel is
unbound, and an optional
AsyncEventHandlers containing hooks to
be invoked on FIDL events.
Client<SpaceShip> client; zx_status_t status = client.Bind( std::move(client_end), dispatcher, // OnClientUnboundFn [&](fidl::UnboundReason, zx_status_t, zx::channel) { /* ... */ }, // AsyncEventHandlers { .on_beacon = [&](int16_t) { /* ... */ } });
Unbinding
The channel may be unbound automatically in case of the server-end being closed
or due to an invalid message being received from the server. You may also
actively unbind the channel through
client.Unbind().).
NOTE: If you shutdown the dispatcher while there are any active bindings, the
unbound hook MAY be executed on the thread executing shutdown. As such, you MUST
not take any locks which could be taken by hooks provided to
fidl::Client APIs
while executing
async::Loop::Shutdown()/async_loop_shutdown(). (You should
probably ensure that no locks are held around shutdown anyway since it joins all
dispatcher threads, which may take locks in user code).
Outgoing FIDL methods
You can invoke outgoing FIDL APIs through the
fidl::Client<SpaceShip>
instance, e.g.
client->SetHeading(0). The full generated API is given below:
class ClientImpl final { public: fidl::StatusAndError SetHeading(int16_t heading); fidl::StatusAndError SetHeading(fidl::BytePart _request_buffer, int16_t heading); fidl::StatusAndError ScanForPlanets( fit::callback<void(fidl::VectorView<Planet> planets)> _cb); fidl::StatusAndError ScanForPlanets(ScanForPlanetsResponseContext* _context); ResultOf::ScanForPlanets ScanForPlanets_Sync(int16_t heading); UnownedResultOf::ScanForPlanets ScanForPlanets_Sync( fidl::BytePart _response_buffer, int16_t heading); fidl::StatusAndError DirectedScan(fit::callback<void(fidl::VectorView<Planet> planets)> _cb); fidl::StatusAndError DirectedScan(DirectedScanResponseContext* _context); ResultOf::DirectedScan DirectedScan_Sync(int16_t heading); UnownedResultOf::DirectedScan DirectedScan_Sync( fidl::BytePart _request_buffer, int16_t heading, fidl::BytePart _response_buffer); };
Note that the one-way and synchronous two-way FIDL methods have a similar API to
the
SyncClient versions. Aside from one-way methods directly returning
fidl::StatusAndError and the added
_Sync on the synchronous methods, the
behavior is identical.
Asynchronous APIs
The managed flavor of the asynchronous two-way APIs simply takes a
fit::callback hook which is executed on response in a dispatcher thread. The
returned
fidl::StatusAndError refers just to the status of the outgoing call.
auto status = client->DirectedScan(0, [&]{ /* ... */ });
The caller-allocated flavor enables you to provide the storage for the
callback as well as any associated state. This is done through the generated
virtual
ResponseContext classes:
class DirectedScanResponseContext : public fidl::internal::ResponseContext { public: virtual void OnReply(fidl::DecodedMessage<DirectedScanResponse> msg) = 0; };
You can derive from this class, implementing
OnReply() and
OnError()
(inherited from
fidl::internal::ResponseContext). You can then allocate an
object of this type as required, passing a pointer to it to the API. The object
must stay alive until either
OnReply() or
OnError() is invoked by the
Client.
NOTE: If the
Client is destroyed with outstanding asynchronous transactions,
OnError() will be invoked for all of the associated
ResponseContexts.
Static functions
(Protocol::Call)
The following code is generated for
SpaceShip::Call:
class Call final { public: static ResultOf::SetHeading SetHeading(zx::unowned_channel client_end, int16_t heading); static UnownedResultOf::SetHeading SetHeading(zx::unowned_channel client_end, fidl::BytePart request_buffer, int16_t heading); static ResultOf::ScanForPlanets ScanForPlanets(zx::unowned_channel client_end); static UnownedResultOf::ScanForPlanets ScanForPlanets(zx::unowned_channel client_end, fidl::BytePart response_buffer); static ResultOf::DirectedScan DirectedScan(zx::unowned_channel client_end, int16_t heading); static UnownedResultOf::DirectedScan DirectedScan(zx::unowned_channel client_end, fidl::BytePart request_buffer, int16_t heading, fidl::BytePart response_buffer); };
These methods are similar to those found in
SyncClient. However, they do not
own the channel. This is useful if one is migrating existing code from the
C bindings to low-level C++. Another use case is when implementing C APIs
which take a raw
zx_handle_t. For example:
// C interface which does not own the channel. zx_status_t spaceship_set_heading(zx_handle_t spaceship, int16_t heading) { auto result = fuchsia::fleet::SpaceShip::Call::SetHeading( zx::unowned_channel(spaceship), heading); return result.status(); }
ResultOf and UnownedResultOf
For a method named
Foo,
ResultOf::Foo is the return type of the managed
flavor.
UnownedResultOf::Foo is the return type of the caller-allocating
flavor. Both.
const char* error() constcontains a brief error message when status is not
ZX_OK. Otherwise, returns
nullptr.
- (only for two-way calls)
FooResponse* Unwrap()returns a pointer to the FIDL response message. For
ResultOf::Foo, the pointer points to memory owned by the result object. For
UnownedResultOf::Foo, the pointer points to a caller-provided buffer.
Unwrap()should only be called when the status is
ZX_OK.
Allocation strategy And move semantics
ResultOf::Foo::Foo.
In-place calls
Both the managed flavor and the caller-allocating flavor will copy the
arguments into the request buffer. When there is out-of-line data involved,
message linearization is additionally required to collate them as per the
wire-format. When the request is large, these copying overhead can add up.
LLCPP supports making a call directly on a caller-provided buffer containing
a request message in decoded form, without any parameter copying. The request
is encoded in-place, hence the name of the scoping class
InPlace.
class InPlace final { public: static ::fidl::internal::StatusAndError SetHeading(zx::unowned_channel client_end, fidl::DecodedMessage<SetHeadingRequest> params); static ::fidl::DecodeResult<ScanForPlanets> ScanForPlanets(zx::unowned_channel client_end, fidl::DecodedMessage<ScanForPlanetsRequest> params, fidl::BytePart response_buffer); static ::fidl::DecodeResult<DirectedScan> DirectedScan(zx::unowned_channel client_end, fidl::DecodedMessage<DirectedScanRequest> params, fidl::BytePart response_buffer); };
These functions always take a
fidl::DecodedMessage<FooRequest> which wraps the
user-provided buffer. To use it properly, initialize the request buffer with a
FIDL message in decoded form. In particular, out-of-line objects have to be
packed according to the wire-format, and therefore any pointers in the message
have to point within the same buffer.
When there is a response defined, the generated functions additionally ask for a
response_buffer as the last argument. The response buffer does not have to be
initialized.
// Allocate buffer for in-place call fidl::Buffer<SetHeadingRequest> request_buffer; fidl::BytePart request_bytes = request_buffer.view(); memset(request_bytes.data(), 0, request_bytes.capacity()); // Manually construct the message auto msg = reinterpret_cast<SetHeadingRequest*>(request_bytes.data()); msg->heading = 42; // Here since our message is a simple struct, // the request size is equal to the capacity. request_bytes.set_actual(request_bytes.capacity()); // Wrap with a fidl::DecodedMessage fidl::DecodedMessage<SetHeadingRequest> request(std::move(request_bytes)); // Finally, make the call. auto result = SpaceShip::InPlace::SetHeading(channel, std::move(request)); // Check result.status(), result.error()
Despite the verbosity, there is actually very little work involved.
The buffer passed to the underlying
zx_channel_call system call is in fact
request_bytes. The performance benefits become apparent when say the request
message contains a large inline array. One could set up the buffers once, then
make repeated calls while mutating the array by directly editing the buffer
in between.
Server API
class Interface { public: virtual void SetHeading(int16_t heading, SetHeadingCompleter::Sync completer) = 0; class ScanForPlanetsCompleterBase { public: void Reply(fidl::VectorView<Planet> planets); void Reply(fidl::BytePart buffer, fidl::VectorView<Planet> planets); void Reply(fidl::DecodedMessage<ScanForPlanetsResponse> params); }; using ScanForPlanetsCompleter = fidl::Completer<ScanForPlanetsCompleterBase>; virtual void ScanForPlanets(ScanForPlanetsCompleter::Sync completer) = 0; class DirectedScanCompleterBase { public: void Reply(fidl::VectorView<Planet> planets); void Reply(fidl::BytePart buffer, fidl::VectorView<Planet> planets); void Reply(fidl::DecodedMessage<DirectedScanResponse> params); }; using DirectedScanCompleter = fidl::Completer<DirectedScanCompleterBase>; virtual void DirectedScan(int16_t heading, DirectedScanCompleter::Sync completer) = 0; }; bool TryDispatch(Interface* impl, fidl_msg_t* msg, fidl::Transaction* txn);
The generated
Interface class has pure virtual functions corresponding to the
method calls defined in the FIDL protocol. One may override these functions in
a subclass, and dispatch FIDL messages to a server instance by calling
TryDispatch.
The bindings runtime would invoke these handler functions appropriately.
class MyServer final : fuchsia::fleet::SpaceShip::Interface { public: void SetHeading(int16_t heading, SetHeadingCompleter::Sync completer) override { // Update the heading... } void ScanForPlanets(ScanForPlanetsCompleter::Sync completer) override { fidl::VectorView<Planet> discovered_planets = /* perform planet scan */; // Send the |discovered_planets| vector as the response. completer.Reply(std::move(discovered_planets)); } void DirectedScan(int16_t heading, DirectedScanCompleter::Sync completer) override { fidl::VectorView<Planet> discovered_planets = /* perform a directed planet scan */; // Send the |discovered_planets| vector as the response. completer.Reply(std::move(discovered_planets)); } };
Each handler function has an additional last argument
completer.
It captures the various ways one may complete a FIDL transaction, by sending a
reply, closing the channel with epitaph, etc.
For FIDL methods with a reply e.g.
ScanForPlanets, the corresponding completer
defines up to three overloads of a
Reply() function
(managed, caller-allocating, in-place), similar to the client side API.
The completer always defines a
Close(zx_status_t) function, to close the
connection with a specified epitaph.
NOTE: Each
Completer object must only be accessed by one thread at a time.
Simultaneous access from multiple threads will result in a crash.
Responding asynchronously
Notice that the type for the completer
ScanForPlanetsCompleter::Sync has
::Sync. This indicates the default mode of operation: the server must
synchronously make a reply before returning from the handler function.
Enforcing this allows optimizations: the bookkeeping metadata for making
a reply may be stack-allocated.
To asynchronously make a reply, one may call the
ToAsync() method on a
Sync
completer, converting it to
ScanForPlanetsCompleter::Async. The
Async
completer supports the same
Reply() functions, and may out-live the scope of
the handler function by e.g. moving it into a lambda capture.
void ScanForPlanets(ScanForPlanetsCompleter::Sync completer) override { // Suppose scanning for planets takes a long time, // and returns the result via a callback... EnqueuePlanetScan(some_parameters) .OnDone([completer = completer.ToAsync()] (auto planets) mutable { // Here the type of |completer| is |ScanForPlanetsCompleter::Async|. completer.Reply(std::move(planets)); }); }
Parallel message handling
NOTE: This use-case is currently possible only using the lib/fidl bindings.)); }
Reference
Design
Goals
- Support encoding and decoding FIDL messages with C++17.
- Provide fine-grained control over memory allocation.
- More type-safety and more features than the C language bindings.
- Match the size and efficiency of the C language bindings.
- Depend only on a small subset of the standard library.
- Minimize code bloat through table-driven encoding and decoding.
- Reuse encoders, decoders, and coding tables generated for C language bindings.
Pointers and memory ownership
LLCPP objects use special smart pointers called
tracking_ptr to keep track of memory ownership.
With
tracking_ptr, LLCPP makes it possible for your code to easily set a value and forget
about ownership:
tracking_ptr will take care of freeing memory when it goes out of scope.
These pointers have two states:
- unowned (constructed from an
unowned_ptr_t)
- heap allocated and owned (constructed from a
std::unique_ptr)
When the contents is owned, a
tracking_ptr behaves like a
unique_ptr and the pointer is
deleted on destruction. In the unowned state,
tracking_ptr behaves like a raw pointer and
destruction is a no-op.
tracking_ptr is move-only and has an API closely matching
unique_ptr.
Types of object allocation
tracking_ptr makes it possible to create LLCPP objects with several allocation strategies.
The allocation strategies can be mixed and matched within the same code.
Heap allocation
To heap allocate objects, use the standard
std::make_unique.
An example with an optional uint32 field represented as a
tracking_ptr.
MyStruct s; s.opt_uint32_field = std::make_unique<uint32_t>(123);
This applies to all union and table fields and data arrays within vectors and strings.
Vector and string data arrays must use the array specialization of
std::unique_ptr,
which takes the element count as an argument.
VectorView<uint32_t> vec; vec.set_data(std::make_unique<uint32_t[]>(10));
To copy a collection to a
VectorView, use
heap_copy_vec.
std::vector<uint32_t> vec; fidl::VectorView<uint32_t> vv = heap_copy_vec(vec);
To copy a string to a
StringView, use
heap_copy_str.
std::string_view str = "hello world"; fidl::StringView sv = heap_copy_str(str);
Allocators
FIDL provides an
Allocator API that enables creating
tracking_ptrs to LLCPP objects through a
number of allocation algorithms. Currently,
BufferThenHeapAllocator,
UnsafeBufferAllocator, and
HeapAllocator are available in fidl namespace.
The
BufferThenHeapAllocator allocates from an in-band fixed-size buffer (can be used for stack
allocation), but falls back to heap allocation if the in-band buffer has been exhausted (to avoid
unnecessary unfortunate surprises). Be aware that excessive stack usage can cause its own problems,
so consider using a buffer size that comfortably fits on the stack, or consider putting the whole
BufferThenHeapAllocator on the heap if the buffer needs to be larger than fits on the stack, or
consider using HeapAllocator. Allocations must be assumed to be gone upon destruction of the
BufferThenHeapAllocator used to make them.
The
HeapAllocator always allocates from the heap, and is unique among allocators (so far) in that
any/all of the
HeapAllocator allocations can out-live the
HeapAllocator instance used to make
them.
The
UnsafeBufferAllocator is unsafe in the sense that it lacks heap failover, so risks creating
unfortunate data-dependent surprises unless the buffer size is absolutely guaranteed to be large
enough including the internal destructor-tracking overhead. If the internal buffer is exhausted,
make<>() will panic the entire process. Consider using
BufferThenHeapAllocator instead. Do not
use
UnsafeBufferAllocator without rigorously testing that the worst-case set of cumulative
allocations made via the allocator all fit without a panic, and consider how the rigor will be
maintained as code and FIDL tables are changed.
Example:
BufferThenHeapAllocator<2048> allocator; MyStruct s; s.opt_uint32_field = allocator.make<uint32_t>(123);
The arguments to
allocator.make are identical to the arguments to
std::make_unique.
This also applies to VectorViews.
BufferThenHeapAllocator<2048> allocator; fidl::VectorView<uint32_t> vec; vec.set_data(allocator.make<uint32_t[]>(10));
To copy a collection to a
VectorView using an allocator, use
copy_vec.
BufferThenHeapAllocator<2048> allocator; std::vector<uint32_t> vec; fidl::VectorView<uint32_t> vv = fidl::copy_vec(allocator, vec);
To create a copy of a string using an allocator, use
copy_str.
BufferThenHeapAllocator<2048> allocator; std::string_view str = "hello world"; fidl::StringView sv = fidl::copy_str(allocator, str);
Unowned pointers
In addition to the managed allocation strategies, it is also possible to directly
create pointers to memory unowned by FIDL. This is discouraged, as it is easy to
accidentally create use-after-free bugs.
unowned_ptr exists to explicitly mark
pointers to FIDL-unowned memory.
The
unowned_ptr helper is the recommended way to create
unowned_ptr_ts,
which is more ergonomic than using the
unowned_ptr_t constructor directly.
MyStruct s; uint32_t i = 123; s.opt_uint32_field = fidl::unowned_ptr(&i);
To create a
VectorView from a collection using an unowned pointer to the
collection's data array, use
unowned_vec.
std::vector<uint32_t> vec; fidl::VectorView<uint32_t> vv = fidl::unowned_vec(vec);
To create a
StringView from unowned memory, use
unowned_str.
const char arr[] = {'h', 'e', 'l', 'l', 'o'}; fidl::StringView sv = fidl::unowned_str(arr, 5);
A
StringView can also be created directly from string literals without using
unowned_ptr.
fidl::StringView sv = "hello world";
Code generator
Mapping FIDL types to low-level C++ types
This is the mapping from FIDL types to Low-Level C++ types which the code generator produces.
fidl::StringView
Defined in lib/fidl/llcpp/string_view.h
Holds a reference to a variable-length string stored within the buffer. C++ wrapper of fidl_string. Does not own the memory of the contents.
fidl::StringView may be constructed by supplying the pointer and number of
UTF-8 bytes (excluding trailing
\0) separately. Alternatively, one could pass
a C++ string literal, or any value which implements
[const] char* data()
and
size(). The string view would borrow the contents of the container.
It is memory layout compatible with fidl_string.
fidl::VectorView<T>
Defined in lib/fidl/llcpp/vector_view.h
Holds a reference to a variable-length vector of elements stored within the buffer. C++ wrapper of fidl_vector. Does not own the memory of elements.
fidl::VectorView may be constructed by supplying the pointer and number of
elements separately. Alternatively, one could pass any value which supports
std::data, such as a
standard container, or an array. The vector view would borrow the contents of
the container.
It is memory layout compatible with fidl_vector.
fidl::Array<T, N>
Defined in lib/fidl/llcpp/array.h
Owns a fixed-length array of elements.
Similar to
std::array<T, N> but intended purely for in-place use.
It is memory layout compatible with FIDL arrays, and is standard-layout. The destructor closes handles if applicable e.g. it is an array of handles.
Tables
The following example table will be used in this section:
table MyTable { 1: uint32 x; 2: uint32 y; };
Tables can be built using the associated table builder. For
MyTable, the associated builder
would be
MyTable::Builder which can be used as follows:
MyTable table = MyTable::Builder(std::make_unique<MyTable::Frame>()) .set_x(std::make_unique<uint32_t>(10)) .set_y(std::make_unique<uint32_t(20)) .build();
MyTable::Frame is the table's
Frame - essentially its internal storage. The internal storage
needs to be allocated separately from the builder because LLCPP maintains the object layout of
the underlying wire format.
In addition to assigning fields with
std::unique_ptr, any of the allocation strategies previously
metioned can be alternatively used.
Bindings library
Dependencies
The low-level C++ bindings depend only on a small subset of header-only parts of the standard library. As such, they may be used in environments where linking against the C++ standard library is discouraged or impossible.
Helper types
fidl::DecodedMessage<T>
Defined in lib/fidl/llcpp/decoded_message.h
Manages a FIDL message in decoded form.
The message type is specified in the template parameter
T.
This class takes care of releasing all handles which were not consumed
(std::moved from the decoded message) when it goes out of scope.
fidl::Encode(std::move(decoded_message)) encodes in-place.
fidl::EncodedMessage<T>
Defined in lib/fidl/llcpp/encoded_message.h Holds a FIDL message in encoded form, that is, a byte array plus a handle table. The bytes part points to an external caller-managed buffer, while the handles part is owned by this class. Any handles will be closed upon destruction.
fidl::Decode(std::move(encoded_message)) decodes in-place.
Example
zx_status_t SayHello(const zx::channel& channel, fidl::StringView text, zx::handle token) { assert(text.size() <= MAX_TEXT_SIZE); // Manually allocate the buffer used for this FIDL message, // here we assume the message size will not exceed 512 bytes. uint8_t buffer[512] = {}; fidl::DecodedMessage<example::Animal::SayRequest> decoded( fidl::BytePart(buffer, 512)); // Fill in header and contents example::Animal::SetTransactionHeaderFor::SayRequest(&decoded); decoded.message()->text = text; // Handle types have to be moved decoded.message()->token = std::move(token); // Encode the message in-place fidl::EncodeResult<example::Animal::SayRequest> encode_result = fidl::Encode(std::move(decoded)); if (encode_result.status != ZX_OK) { return encode_result.status; } fidl::EncodedMessage<example::Animal::SayRequest>& encoded = encode_result.message; return channel.write(0, encoded.bytes().data(), encoded.bytes().size(), encoded.handles().data(), encoded.handles().size()); } | https://fuchsia.dev/fuchsia-src/development/languages/fidl/tutorials/tutorial-llcpp | CC-MAIN-2020-29 | en | refinedweb |
NAME
Create an event pair.
SYNOPSIS
#include <zircon/syscalls.h> zx_status_t zx_eventpair_create(uint32_t options, zx_handle_t* out0, zx_handle_t* out1);
DESCRIPTION
zx_eventpair_create() creates an event pair, which is a pair of objects that
are mutually signalable.
The signals ZX_EVENTPAIR_SIGNALED and ZX_USER_SIGNAL_n (where n is 0 through 7)
may be set or cleared using
zx_object_signal(), which modifies the signals on the
object itself, or
zx_object_signal_peer(), which modifies the signals on its
counterpart.
When all the handles to one of the objects have been closed, the ZX_EVENTPAIR_PEER_CLOSED signal will be asserted on the opposing object.
The newly-created handles will have the ZX_RIGHT_TRANSFER, ZX_RIGHT_DUPLICATE, ZX_RIGHT_READ, ZX_RIGHT_WRITE, ZX_RIGHT_SIGNAL, and ZX_RIGHT_SIGNAL_PEER rights.
Currently, no options are supported, so options must be set to 0.
RIGHTS
TODO(ZX-2399)
RETURN VALUE
zx_eventpair_create() returns ZX_OK on success. On failure, a (negative)
error code is returned.
ERRORS
ZX_ERR_INVALID_ARGS out0 or out1 is an invalid pointer or NULL.
ZX_ERR_NOT_SUPPORTED options has an unsupported flag set (i.e., is not 0).
ZX_ERR_NO_MEMORY Failure due to lack of memory. There is no good way for userspace to handle this (unlikely) error. In a future build this error will no longer occur. | https://fuchsia.dev/fuchsia-src/reference/syscalls/eventpair_create | CC-MAIN-2020-29 | en | refinedweb |
Apple consistently marks "their", "there", "it’s" and several other similar common words as misspelled in all of my apps. Why is this happening and how do I prevent it?
Tag: common
c++ – Compare folders and find common files
I found this Powershell command useful in comparing folders and find common and different files.
Since I really like C and C++, I’ve decided to create a program to do that.
It will get all files in 2 folders given as arguments, will store them in an std::map,is that the correct container?
After, it will compare the 2 maps and give the common files.
Some notes:
The findFiles method should benefit from RAII treatment, but since I have ZERO work or internship experience, I am unable to implement that.
Some functions like finding a file size and iterating over a folder are present in C++ 17, but I use Digital Mars, an old compiler not up to date.
I use this compiler because it is small, provided as a compressed folder aka portable in the mainstream lexicon (even though portable means something else) and its use is straightforward.
I used an online code beautifier for indentation.
The sanitizePath method is used to eliminate trailing “/” or “” from the given path.
Please give all your valuable comments on this work.
#include <iostream> #include <iterator> #include <map> #include <string> #include <sys/stat.h> #include <windows.h> #ifndef INVALID_FILE_ATTRIBUTES constexpr DWORD INVALID_FILE_ATTRIBUTES = ((DWORD)-1); #endif bool IsDir(const std::string &path) { DWORD Attr; Attr = GetFileAttributes(path.c_str()); if (Attr == INVALID_FILE_ATTRIBUTES) return false; return (bool)(Attr & FILE_ATTRIBUTE_DIRECTORY); } std::string sanitizePath(std::string const &input) { auto pos = input.find_last_not_of("/\"); return input.substr(0, pos + 1); } std::map<std::string, unsigned long > findFiles(std::string &spath) { size_t i = 1; WIN32_FIND_DATA FindFileData; std::map<std::string, unsigned long > list; std::string sourcepath = spath + std::string("\*.*"); HANDLE hFind = FindFirstFile(sourcepath.c_str(), &FindFileData); if (hFind != INVALID_HANDLE_VALUE) do { std::string fullpath = std::string(spath) + std::string("\") + std::string(FindFileData.cFileName); if (*(fullpath.rbegin()) == '.') continue; else if (FindFileData.dwFileAttributes &FILE_ATTRIBUTE_DIRECTORY) findFiles(fullpath); else { list(FindFileData.cFileName) = FindFileData.nFileSizeHigh *(MAXWORD + 1) + FindFileData.nFileSizeLow; } } while (FindNextFile(hFind, &FindFileData)); FindClose(hFind); return list; } void displayMap(std::map<std::string, unsigned long > &map) { std::map<std::string, unsigned long>::const_iterator itr; for (itr = map.begin(); itr != map.end(); itr++) std::cout << "File Name: " << itr->first << " Size: " << itr->second << " bytes" << std::endl; } std::map<std::string, unsigned long > map_intersect(std::map<std::string, unsigned long > const &source, std::map<std::string, unsigned long > const &dest) { std::map<std::string, unsigned long > inter; std::map<std::string, unsigned long>::const_iterator iter = dest.begin(); std::map<std::string, unsigned long>::const_iterator end = dest.end(); for (; iter != end; iter++) { if (source.find(iter->first) != source.end()) { inter(iter->first) = iter->second; } } return inter; } std::map<std::string, unsigned long > map_difference(std::map<std::string, unsigned long > const &source, std::map<std::string, unsigned long > const &dest) { std::map<std::string, unsigned long > diff = source; std::map<std::string, unsigned long>::const_iterator iter = dest.begin(); std::map<std::string, unsigned long>::const_iterator end = dest.end(); for (; iter != end; iter++) { if (source.find(iter->first) != source.end()) { diff.erase(iter->first); } } return diff; } int main(int argc, char **argv) { if (argc <= 2) { std::cerr << "No path or filename provided" << std::endl; return EXIT_FAILURE; } const char *source = argv(1); const char *destination = argv(2); if (!IsDir(source)) { std::cerr << "Source path doesn't exist" << std::endl; return EXIT_FAILURE; } if (!IsDir(destination)) { std::cerr << "Destination path doesn't exist" << std::endl; return EXIT_FAILURE; } std::string spath = sanitizePath(source); std::string dpath = sanitizePath(destination); std::cout << "Comparing " << spath << " and " << dpath << std::endl; std::map<std::string, unsigned long > slist, dlist, ilist, diflist; slist = findFiles(spath); dlist = findFiles(dpath); ilist = map_intersect(slist, dlist); diflist = map_difference(slist, dlist); if (ilist.empty()) std::cout << "There is no common files" << std::endl; else { std::cout << "The common files are" << std::endl; displayMap(ilist); } if (diflist.empty()) std::cout << "The 2 folder are the same" << std::endl; return EXIT_SUCCESS; }
code quality – Is it common to have to iterate on a design due to overlooking problems with it?
Iterating a through multiple versions of a design is a great thing to do! It is rare to create a design that has all the good properties at the first try. As software engineers, we should be humble and accept that we will make mistakes or overlook things. It is arrogant to think that you can create good design at your first try.
But as you say, it can be exhausting to work on same piece of code for prolonged period of time. But there might be practices and disciplines that make it more bearable.
Test automation, preferably TDD
This this the one discipline that enables us to actually change the design. By having solid and reliable suite of automated tests, the design can be changed drastically without fear of breaking existing functionality. It is that fear which is most exhausting.
Doing TDD also makes it more likely that you create working and ‘good enough’ design at your first try. This design then requires only small improvements to push it into greatness.
Refactoring
Instead of focusing on changing the whole design, focus on small problems and fix those. Fixing many small problems, will result in big changes in overall design. Making small changes is less mentally exhausting as you get feedback about your design sooner and you can stagger your attention between multiple designs, slowly improving all of them.
Good vs. Perfect
The saying ‘Perfect is the enemy of good.’ comes to mind here. Knowing when to stop trying to improve the design is learned skills. If the design is being used and changed, then you will have lots of small oportunities to improve the design, so you don’t have to invest all that time in the beginning. As long as you follow Boy Scouts rule of ‘Always leave code cleaner than you found it.’, then the design will improve over time.
object oriented – What does “common interface” mean in OOP?
I have seen the term “common interface” used a lot while reading books about OOP.
For example, the book The Essence of Object-Oriented Programming with Java and UML says the following:
Abstract classes usually define a common interface for subclasses by
specifying methods that all subclasses must override and define
My understanding of the term “common interface” is the following:
Assume that we have a superclass (or an
interface or an
abstract class) called
Animal and two subclasses called
Dog and
Cat, and
Animal have two virtual methods called
makeSound() and
move().
Now the common interface would be composed of two methods which are
Animal.makeSound() and
Animal.move().
Assume that we have the following code:
Animal animal1 = new Dog(); animal1.makeSound(); animal1.move(); animal1 = new Cat(); animal1.makeSound(); animal1.move();
The explanation of the above code is the following:
Animal animal1 = new Dog() creates an
Animal common interface and associate a
Dog object with it:
animal1.makeSound() sends an
Animal.makeSound() message to the common interface, and then the common interface sends a
Dog.makeSound() message to the
Dog object:
Same thing happens in the case of
animal1.move() (which is the
Animal.move() message is sent to the common interface, etc.).
animal1 = new Cat() removes the
Dog object from the common interface, and associate a
Cat object with the common interface:
animal1.makeSound() sends an
Animal.makeSound() message to the common interface, and then the common interface sends a
Cat.makeSound() message to the
Cat object:
Same thing happens in the case of
animal1.move() (which is the
Animal.move() message is sent to the common interface, etc.).
Am I correct in my understanding?
database design – common columns in all tables in mysql
I want to create a table like base_table with below columns –
id, created_at, created_by.
and for all other tables, I want created_at and create_by columns available through inheritance.
I don’t want to create these common columns in all other tables.
Why is password confirmation common in password resets and updates?
I’ve seen multiple websites with only one field for the password during registration, whereas there are two fields – Enter Password and Confirm Password – for password reset and update tasks.
Why a confirm-password is quite common in password-reset and update password?
I’ve seen multiple websites with only one field for the password during registration, whereas there are two fields password and confirm-password for password-reset and password-update pages.
authentication – Client certificate common name? Subject alternative name?
For an IoT project, I want to secure client server communication. I want both the server (Apache) and the clients identify/authenticate each other (a client won’t communicate with other clients) before clients can post some data.
There is much less information about client certificates. Besides documentations, there are best practices. I would like to know, how to set common name and subject alternative names for clients, as they won’t have a domain name and a fix IP address.
Do I simply tell the server to ignore a mismatch? Can I use a wild card only CN (CN=*)? I also would like the cert to identify specific client. Server needs to be able to tell apart client 1 from client 2, etc…
Thanks!
sony alpha – Tethering DSLR camera to PC via any common WiFi network
I am aware that it is possible to tether a camera to PC via the WiFi network that is created by the WiFi-enabled camera itself. But I want to know if it is possible to tether by connecting both camera and PC to any other common WiFi network.
Specifically, I am using Sony Alpha6400 and qDSLRDashboard as PC client for tethering. I connected the camera to my home WiFi network (to which my PC is connected). But I do not know how to go ahead. qDSLRDashboard does not seem to recognize the camera connected to same WiFi network.
Note: I have not tried this in Sony Imaging Edge. This question is specific to qDSLRDashboard.
Thank you for your answers.
tripod heads – Which are the most common quick release plate systems out there
The camera has a hole in the bottom that will be meant to take either a 1/4-20 UNC or 3/8-16 UNC threaded screw.
Most attachments for this will either come with both types of screws or will necessitate the use of an adapter if what the item comes with is not fit to your camera. This monopod, for example, has a reversible screw for both. Point is, these attachments are standardized so the tripod world is your oyster, so to speak.
The quick release plate will attach to your camera via one of these screws – so you can use the same plate across any of your cameras. Or, if you’re lazy like me, you’ll buy extra plates and just keep ’em on your bodies.
The plate will be designed to fit whatever head you’re using. They could be custom designed for the head or could be something more standard, like the Arca-Swiss style plate.
That being said, I’ve never tried to mix brands and I have heard stories of one brand’s arca-swiss plate not quite meshing well with another’s arca-swiss head, even though those should be universal.
To summarize – because of the universality of the attachment screw, don’t let this impact your tripod head decision. Buy that for the features you want and then worry about the attachment, whether you need an adapter or not.
If you want universality in the QR plates, then go for a head that supports Arca-Swiss style plates. Though again, be warned that that is no guarantee of a great meshing between the head and plate if you choose to mix brands. | https://proxies-free.com/tag/common/ | CC-MAIN-2020-29 | en | refinedweb |
Jason Seifer
Treehouse Guest Teacher
Orlando, FL
To describe my personality, I'd say good looking.
Topics & Specialties
Courses & Workshops I've Taught
- 20 minWorkshop
Introduction to Bundler
Bundler is the standard application dependency manager for Ruby. In this workshop, you'll learn what problems Bundler solves, how it works, how to use it in your own projects, and more.Viewed
- 1
- 2
Ruby Gems
Gems.
- 1
- 2
- 3
Ruby Core and Standard Library
In Ruby Core and Standard Library, we're going to learn about the different pieces that make up the Ruby distribution
-
- 1
- 2
- 3
- 4
Ruby Modules
Modules are an extremely powerful utility when coding in Ruby. Modules allow you to add behavior to classes, hold constants, add namespaces, and more.
-
- 1
- 2
- 3
Build an Address Book in Ruby
In this course, you'll build a simple command line address book application using Ruby. You'll put to use a lot of skills learned in previous courses to put everything together: objects, classes, blocks, input and output, and more.
- 1
- 2
- 3
Ruby Blocks
In this course, you’ll learn all about blocks in Ruby. Blocks are a piece of syntax that you can use in Ruby to accomplish all kinds of amazing programming feats. Ruby programmers make constant use of blocks so they are an important piece of the language to learn.
- 1
- 2
- 3
Ruby Objects and Classes
Ruby is known as an "Object Oriented" programming language. But what does object oriented mean? In this course, we'll cover the basics of Ruby Classes. We'll learn what classes are, how they are used, and how to write our own.
- 1
- 2
- 3
Ruby Loops
In Ruby Loops, you'll learn how to automatically repeat statements using Ruby. You'll learn about the loop construct, including while loops, until loops, for loops, and more. You'll also learn the basics of iteration and then move on to creating a simple contact list management program.
- 1
- 2
- 3
Ruby Collections. | https://teamtreehouse.com/jasonseifer | CC-MAIN-2020-29 | en | refinedweb |
Spirit is an object-oriented recursive-descent parser generator framework implemented using template meta-programming techniques. Expression templates allow us to approximate the syntax of Extended Backus-Normal[sic] Form (EBNF) completely in C++. [_1]
EBNF is also known as Extended Backus-Naur Form [_2]. EBNF is a metasyntax used to formally describe a language. In this example the language is the set of possible expressions that are used to restrict SQL select statements.
The sample code shown is all real code, shown with permission of the owner (a financial institution that wishes to remain anonymous). This piece of code was chosen as a "proof of concept" to show how Spirit works and how it is implemented, both to the management and to the other developers.
The application is a trading system in a bank, and the piece of code is responsible for interpreting what the user enters in a free-text field in the interface used to specify search restrictions. For example, the user may just want to search for certain instruments, or all trades in books starting with the letters B through D. The function query_parse (shown below) is the old C version that takes this free text and produces one or more "tokens" for generating the SQL where clause.
---- some header.h /*********************************************** * SQL Token: consists of : * 1. logical operator : and, or, like * 2. mathematical operator : <, >, =, <=, * >=, <>, * 3. value : the real value * - i.e. < 30, 30 is the value * Before any cell string gets built into an SQL * sub-clause, it'll be parsed by query_parse() * into a linked-list of SQLTokens, and * query_doit() will build using such SQLTokens, * instead of cell strings directly. **********************************************/ typedef struct _SQLToken { char* logic_op; char* math_op; char* value; struct _SQLToken *next; } SQLToken; ---- source file.cpp static SQLToken* query_parse (char *string) { typedef enum { NEUTRAL, LOP, MOP, VALUE } STATE; char c; int index = 0, blank = 0; SQLToken *token, *tmp=0, *head; // fix compiler warning - tmp STATE state = NEUTRAL; head = sqltoken_alloc(); token = head; while( ( c = string[index] ) && ( c != '\n' ) ) { blank = 0; switch(state) { /*****************************************/ case NEUTRAL: switch(c) { case ' ': case '\t': blank = 1; ++index; break; case '+': case '|': state = LOP; break; case '<': case '>': case '=': case '!': /* if ( first != 0) return NULL; */ /* only the begin of string may have */ /* no LOP first = 1; */ state = MOP; break; default : /* return NULL; */ state = VALUE; break; } /* alloc space for next SQLToken, if needed */ if ((token == NULL) && (!blank)) { token = sqltoken_alloc(); tmp->next = token; } break; /*******************************************/ case LOP: switch(c) { case '|': while ((c != ' ') && (c!= '\0') && (c != '"') && (c != '>') && (c!= '<') && (c != '=') && ( c != '!')) c = string[++index]; strcat(token->logic_op, "or"); break; case '+': while ((c != ' ') && (c!= '\0') && (c != '"') && (c != '>') && (c!= '<') && (c != '=') && ( c != '!')) c = string[++index]; strcat(token->logic_op, "and"); break; default: return NULL; } state = NEUTRAL; if ((c != '"') && (c != '>') && (c!= '<') && (c != '=') && ( c != '!')) index++; break; /*******************************************/ case MOP: switch(c) { case ' ': case '\t': state = VALUE; index++; break; case '<': case '>': case '=': case '!': strncat(token->math_op, &c, 1); index++; break; default: /* if (token->math_op == NULL) return NULL; MOP missing */ state = VALUE; } break; /*******************************************/ case VALUE: switch(c) { case ' ': index++; break; case '"': while (((c = string[++index]) != '"') && (c != '\0') && (c != '\n')) strncat(token->value, &c, 1); index++; state = NEUTRAL; tmp = token; token = token->next; break; default: while ((c != ' ') && (c != '\0')&& (c != '\n')&& (c != '"')) { strncat(token->value, &c, 1); c = string[++index]; } state = NEUTRAL; tmp = token; token = token->next; } break; } } return head; }
You can see that this code is not very easy to follow, and not overly descriptive in what it does. Clearly it iterates over the character array switching on a remembered state to build up the SQLToken instance. However it is not apparent if there is a bug in the code, and should this method need to be extended due to a change in the grammar, much rework may be needed.
A small piece of history. The application was started around 12 years ago and was originally all C. Policy is now that new development should be in C++, updating old code where necessary. So to bring the interface more into line with C++ the signature was changed to:
std::vector<SQLToken> query_parse( char const* input)
The input parameter was not changed to a string as that would not really have gained anything. The calling function had the data as a char const*, and that is also the type for the parameter for the parser. Also the SQLToken definition changed to use std::string:
struct SQLToken { std::string logic_op; std::string comp_op; std::string value; };
In order to move the legacy function to Spirit, the grammar had to be defined. By meticulous iteration of the existing function with sample input, the following grammar was extracted.
comp_op ::= '<' | '<=' | '<>' | '>' | '>=' | '=' | '!=' logic_op ::= '+' | '|' value ::= '"' not_quote+ '"' | not_space+ element ::= (logic_op? comp_op? value)+
where not_quote is any character except the quote character ("), and not_space is any character except white space (space, tab, or new line).
Now the documentation of the boost website for Spirit gives a great, easy to follow introduction [_3]. The management summary equivalent goes something like this:
a parser is made up from rules
rules are place holders for expressions
expressions are either primitives or combinations
Spirit provides classes that define rules and parsers. It also provides a fairly complete set of primitives. The main primitives used for this example are spirit::str_p and spirit::ch_p. str_p matches a string, and ch_p matches a single character.
Expressions can be grouped with brackets, alternatives defined by | (bar character), and combined using the >> operator. The bar operator is overloaded in Spirit allowing us to not explicitly wrap alternatives in constructor calls. This is a convenience especially when trying to fit examples in a small text area.
The first two grammar components are quite simple. For now just accept that what is being assigned is some form of rule class and the declaration will come later.
comp_op = spirit::str_p("<>") | "<=" | "<" | ">=" | ">" | "=" | "!="; logic_op = spirit::ch_p('+') | '|';
The quirky parts of this are that the expressions are evaluated in a short circuit manner, so for the comparison operators you need to list the longest first, so <> needs to come before < otherwise the < will be matched for that expression. The Spirit library does provide a way to get around the short circuit nature with a directive. Directives could be thought of as modifiers to an expression. Here use of the longest_d directive would suffice, which would give:
comp_op = spirit::longest_d[ spirit::str_p('<') | '<=' | '<>' | '>' | '>=' | '=' | '!=' ]
However the choice was to go with the simpler definition and a comment.
Now for the value rule. Some of the predefined character parsers were used for this.
ch_p('"') matches the quote character, ~ch_p('"') matches any character except the quote character, and +(~ch_p('"')) matches one or more non-quote characters. So the first part of the value is
'"' >> (+(~spirit::ch_p('"'))) >> '"'
The alternative to a quote enclosed string is a single word, where the contents of the word is anything that isn't whitespace. Spirit provides a space_p that matches whitespace characters, so ~space_p will match non-whitespace characters. To make a word, we use
(+(~spirit::space_p))
Most of the time when parsing, whitespace is ignored, however in this case whitespace matters. This rule as it stands actually matches the string "a b c d" as "abcd". In order to tell the parser that we are concerned about the whitespace, we use the directive lexeme_d. The full rule for value is then:> (+(~spirit::ch_p('"'))) >> '"' | spirit::lexeme_d[(+(~spirit::space_p))];
The element then is an accumulation of the other rules. operator! is used as zero or one, so the element is then
element = +(!logic_op >> !comp_op >> value);
The complete definition for the grammar object is then:
struct query_grammar : public spirit:: grammar<query_grammar> { template <typename ScannerT> struct definition { definition(query_grammar const& self) { // short circuit, so do longer // possibilities first comp_op = spirit::str_p("<>") | "<=" | "<" | ">=" | ">" | "=" | "!="; logic_op = spirit::ch_p('+') | '|';> (+(~spirit::ch_p('"'))) >> '"' | spirit::lexeme_d[ (+(~spirit::space_p)) ]; element = +(!logic_op >> !comp_op >> value); BOOST_SPIRIT_DEBUG_RULE(comp_op); BOOST_SPIRIT_DEBUG_RULE(logic_op); BOOST_SPIRIT_DEBUG_RULE(value); BOOST_SPIRIT_DEBUG_RULE(element); } spirit::rule<ScannerT> comp_op, logic_op, value, element; spirit::rule<ScannerT> const& start() const { return element; } }; };
The BOOST_SPIRIT_DEBUG_RULE macro enables some very useful debugging output which is handy when tracing your grammar if it is going wrong. A quick interactive test program allows us to test the grammar.
int main() { std::cout << "> "; std::string input; std::getline(std::cin, input); query_grammar parser; while (input != "quit") { if (spirit::parse(input.c_str(), parser, spirit::space_p).full) std::cout << "parse succeeded"; else std::cout << "parse failed"; std::cout << "\n> "; std::getline(std::cin, input); } }
Ths was used to prove that the grammar was correct. The next challenge is how to get the parser to populate the vector of SQLToken objects while parsing? I want the SQLToken object to be populated during parsing and, once a complete token has been processed (!logic_op >> !comp_op >> value), it should be pushed on to the vector.
The interesting part of handling assignment is that the definition struct constructor takes a constant reference to the outer grammar structure, so you cannot change normal member variables. This leaves the choices of mutable and references, and personally I tend to shy away from mutable where there is another choice. So the outer grammar stuct holds references to objects that we want to populate.
struct query_grammar : public spirit::grammar<query_grammar> { // definition structure here... query_grammar(std::vector<SQLToken>& tokens, SQLToken& token) : tokens_(tokens), token_(token) {} std::vector<SQLToken>& tokens_; SQLToken& token_; };
The next step is to add the actions to the rules, and this is done through the use of "actors". There are a number of predefined actors. The main one used here is assign_a. The function call operator on this actor takes one or two parameters. The first parameter is a reference to the string object to populate. If the second parameter is passed in, it assigns the second parameter to the first, and if not, the text that is matched for the rule is assigned.
There is the situation where we want to assign "and" when the parser finds '+', and "or" for '|', so the logic_op rule is changed to look like this:
logic_op = spirit::ch_p('+')[spirit::assign_a( self.token_.logic_op, "and")] | spirit::ch_p('|')[spirit::assign_a( self.token_.logic_op, "or")];
Since the action is being used on the components of the rule, the definition now has to specify ch_p('|') instead of just '|', as there is no operator[] on a char.
For the value, if it was quote enclosed, the value is the contents of the string without the quotes, otherwise the value is the whole single word, so the actor is applied to the parts of the value rule, not on the rule as a whole.> (+(~spirit::ch_p('"'))) [spirit::assign_a(self.token_.value)] >> '"' | spirit::lexeme_d[ (+(~spirit::space_p)) [spirit::assign_a(self.token_.value)] ];
The comparison operator can be handled at the whole rule level as the text of the parsed rule is the string value that we want to store for the SQLToken. This is achieved by specifying the action for the comp_op rule in the element.
element = +(!logic_op >> !(comp_op[spirit::assign_a( self.token_.comp_op)]) >> value);
The last part of the parsing is to add the token to the vector. One way of doing this is through a functor object. Standard Spirit functors need to handle two char const* parameters. These are the start and end of the "match" for the rule. In this case they aren't used at all, but instead the functor operates on the references that it is constructed with.
struct push_token { push_token(std::vector<SQLToken>& tokens, SQLToken& token) : tokens_(tokens), token_(token) {} void operator()(char const*, char const*) const { tokens_.push_back(token_); // reset token_ to blanks token_ = SQLToken(); } std::vector<SQLToken>& tokens_; SQLToken& token_; };
To incorporate this functor into our element rule, we specify it as the action and construct it with the same references as the grammar.
element = +(!logic_op >> !(comp_op[spirit::assign_a( self.token_.comp_op)]) >> value)[push_token(self.tokens_, self.token_)];
Now it's done. After testing the results, which to my initial surprise worked perfectly, the old function was replaced with this:
namespace { using namespace boost; struct push_token { push_token(std::vector<SQLToken>& tokens, SQLToken& token) : tokens_(tokens), token_(token) {} void operator()(char const*, char const*) const { tokens_.push_back(token_); // reset token_ to blanks token_ = SQLToken(); } std::vector<SQLToken>& tokens_; SQLToken& token_; }; struct query_grammar : public spirit ::grammar<query_grammar> { template <typename ScannerT> struct definition { definition(query_grammar const& self) { // short circuit, so do longer // possibilities first comp_op = spirit::str_p("<>") | "<=" | "<" | ">=" | ">" | "=" | "!="; // + -> and, | -> or. Could now // easily add in "and" and "or" logic_op = spirit::ch_p('+')[spirit ::assign_a(self.token_.logic_op, "and")] | spirit::ch_p('|')[spirit ::assign_a(self.token_.logic_op, "or")]; // values are single words or // enclosed in quotes.> (+(~spirit::ch_p('"'))) [spirit::assign_a(self.token_.value)] >> '"' | spirit::lexeme_d[ (+(~spirit::space_p)) [spirit::assign_a(self.token_.value)] ]; // EBNF: (logic_op? comp_op? value)+ // parsing fails if there are no values. element = +(!logic_op >> !(comp_op[spirit::assign_a( self.token_.comp_op)]) >> value)[push_token(self.tokens_, self.token_)]; } spirit::rule<ScannerT> comp_op, logic_op, value, element; spirit::rule<ScannerT> const& start() const { return element; } }; query_grammar(std::vector<SQLToken>& tokens, SQLToken& token) : tokens_(tokens), token_(token) {} std::vector<SQLToken>& tokens_; SQLToken& token_; }; std::vector<SQLToken> query_parse( char const* input) { Logger logger("gds.query.engine.parse"); GDS_DEBUG_STREAM(logger) << "query_parse input: " << input; std::vector<SQLToken> tokens; SQLToken token; query_grammar parser(tokens, token); if (spirit::parse(input, parser, spirit::space_p).full) { if (logger.isDebugEnabled()) { for (unsigned i = 0; i < tokens.size(); ++i) GDS_DEBUG_STREAM(logger) << tokens[i]; } } else { GDS_DEBUG(logger, "parse failed"); tokens.clear(); } return tokens; } } // anon namespace
An anonymous namespace is used instead of the old static C function, some logging was added using our logging classes, but apart from that, the code went in without other modifications.
In total, I achieved a reduction of about 40 lines of code, which in itself is completely meaningless. The general complexity of the code increased, but at least in my opinion, it is now more maintainable and extensible. Should the client want to make modifications to the grammar it is now a relatively simple operation compared to the nightmare of altering the original embedded switch statements.
Special thanks to Phil Bass and David Carter-Hitchin for reviewing this article. | https://accu.org/index.php/journals/294 | CC-MAIN-2020-29 | en | refinedweb |
Hello to this little introduction about Custom directives in angular.
What is Directive?
In angular a directive is special kind of component but without any template referencing directly. Meaning A component is directive with template binding out of the box. A directive can be useful for any DOM manipulation in application. In fact, angular is recommending to use custom-directive when you want to safely manipulate DOM.
Types of directive?
- Component directive. any component in angular with @Component decorator is special kind of directive, and we called it as a component directive.
- Attribute directive. Angular provides [ngClass], [ngStyle] which are useful for changing appearance of element.
- Structural directive. Angular provides *ngIf, *ngFor, *ngSwitch are all called as structural directive because of all are used to manipulate DOM structure by adding or removing element directly.
- Custom directive. this is directive we can used in angular for custom DOM logic implementation. we can create custom directive using angular-cli by firing
ng generate directive <directive-name>and custom directive is generated with @Direvtive() decorator in class. By default scope is ngModule level.
Today, we are going to learn how to implement our own *ngIf using custom-directive.
now lets create custom directive by firing this command..
ng generate directive custom-directive-if
Above command will generate directive like this..
import { Directive } from '@angular/core'; @Directive({ selector: '[appCustomDirectiveIf]' }) export class CustomDirectiveIfDirective { constructor() { } }
now lets create add below code to app.component.html
<div class="row p-1"> <div class="col-6 d-flex align-items-center"> <input #checkboxValue <label class="ml-1 cursor" for="checkBox"> show/hide </label> </div> <div * Custom If directive content displayed... </div> </div>
Above code note we are using our own custom implementation of directive to replace *ngIf or understand properly how to manipulate DOM node properly. we are using
*appCustomDirectiveIf and passing reference of
show to it which is coming from checkbox. When user checked checkbox
show becomes true by calling (change) event of input type="checkbox", so we call
onCheckboxChanged() and passes reference of input-checkbox. Then after checkbox value is passed to out custom directive as a @Input().
now implement custom-directive
import { Directive, Input, TemplateRef, ViewContainerRef } from '@angular/core'; @Directive({ selector: '[appCustomDirectiveIf]' }) export class CustomDirectiveIfDirective { @Input() set appCustomDirectiveIf(show: boolean){ show ? this.container.createEmbeddedView(this.templateRef) : this.container.clear(); } constructor(private templateRef: TemplateRef<any>, private container: ViewContainerRef) { } }
We are injecting 1. TemplateRef. TemplateRef is the one we applied our custom directive in template. means the template node reference on which we are applying custom-directive.
- ViewContainerRef. In angular we are not directly manipulate DOM or accessing DOM structure. Because angular is plateform independent meaning same code base you can use in ng-Universal or in IONIC. SO, accessing DOM directly you break code to run in other plateform where DOM is not available. So to safely access DOM structure angular creates their own VIEW hierarchy and based on that DOM is created or removed. To access VIEW hierarchy angular provide
ViewContainerRef, and some methods to add or remove element from view and view directly bounded to DOM so it will update DOM for us automatically.
Now, when we pass true to @Input() view.createEmbeddedView() method is called and it will create new DOM node element in current element hierarchy. and if value is false then we clear out view hierarchy and DOM updates occurs too.
You can find out working code in this link
Posted on May 28 by:
Discussion | https://dev.to/gaurangdhorda/custom-directive-implementation-like-ngif-is-fd6 | CC-MAIN-2020-29 | en | refinedweb |
🎼webpack 4: released today!!✨
Codename: Legato 🎶
Today we’re happy to announce that webpack 4 (Legato) is available today! You can get it via yarn or npm using:
$> yarn add webpack webpack-cli --dev
or
$> npm i webpack webpack-cli --save-dev
🎼 Why Legato?
We wanted to start a new tradition by giving each of our major releases a codename! Therefore, we decided to give this privilege to our largest OpenCollective sponsor: trivago!
So we reached out and here was their response:
[At trivago] we usually give our projects a name with a musical theme. For example, our old JS Framework was called “Harmony”, our new framework is “Melody”. On the PHP side, we use Symfony with a layer on top called “Orchestra”.
Legato means to play each note in sequence without gaps.
Webpack bundles our entire frontend app together, without gaps (JS, CSS & more). So we believe that “legato” is a good fit for webpack — Patrick Gotthardt at trivago Engineering
We were thrilled, because everything we worked on this release encapsulates this idea webpack feeling legato, or without gaps, when you use it. Thank you so much to trivago for this incredible year of sponsorship and for naming webpack 4! 👏👏
🎊 trivago helps secure webpack’s future 🎊
With webpack becoming the tool of choice for many companies across the world, its success and that of the companies…
medium.com
🕵️What’s new?
There are so many new things in webpack 4, that I can’t list them all or this post would last forever. Therefore I’ll share a few things, and to see all of the changes from 3 to 4, please review the release notes & changelog.
🏎 webpack 4, is FAST (up to 98% faster)!
We were seeing interesting reports of build performance from the community testing our beta, so I shot out a poll so we could verify our findings:
The results were startling. Build times decreased from 60 to 98%!! Here are just a few of the responses we’ve seen.
This also gave us the opportunity to identify some key blocking bugs in loaders and plugins that have since now been fixed!! PS: we haven’t implemented Multicore, or Persistent Caching yet (slated for version 5). This means that there is still lots of room for improvement!!!!
Build speed was one of the top priorities that we had this release. One could add all the features in the world, however if they are inaccessible and waste minutes of dev time, what’s the point? This is just a few of the examples we’ve seen so far, but we really look forward to having you try it out and report your build times with #webpack #webpack4 on twitter!
😍 Mode, #0CJS, and sensible defaults
We introduced a new property for your config called
mode. Mode has two options:
development or
production and defaults to
production out of the box. Mode is our way of providing sensible defaults optimized for either build size (production) optimization, or build time (development) optimization.
To see all of the details behind mode, you can check out our previous medium article here:
In addition, entry, output are both defaulted. This means you don’t need a config to get started, and with
mode, you’ll see your configuration file get incredibly small as we are doing most of the heavy lifting for you now!
Legato means to play each note in sequence without gaps.
With all these things, we now have a platform of zero config that we want you to extend. One of webpack’s most valuable feature is that we are deeply rooted in extensibility. Who are we to define what your #0CJS (Zero-Config JS) looks like? When we finish the design and release of our webpack presets design, this means you can extend #0CJS to be unique and perfect for your workflow, company, or even framework community.
✂ Goodbye CommonsChunkPlugin
We have deprecated and removed CommonsChunkPlugin, and have replaced it with a set of defaults and easily overridable API called
optimization.splitChunks. Now out of the box, you will have shared chunks automatically generated for you in a variety of scenarios!
For more information on why we did this, and what the API looks like, see this post!!
webpack 4: Code Splitting, chunk graph and the splitChunks optimization
webpack 4 made some major improvements to the chunk graph and added a new optimiztion for chunk splitting (which is a…
medium.com
🔬WebAssembly Support
Webpack now by default supports
import and
export of any local WebAssembly module. This means that you can also write loaders that allow you to
import Rust, C++, C and other WebAssembly host lang files directly.
🐐 Module Type’s Introduced + .mjs support
Historically JavaScript has been the only first-class module type in webpack. This caused a lot of awkward pains for users where they would not be able to effectively have CSS/HTML Bundles, etc. We have completely abstracted the JavaScript specificity from our code base to allow for this new API. Currently built, we now have 5 module types implemented:
javascript/auto: (The default one in webpack 3) JavaScript module with all module systems enabled: CommonJS, AMD, ESM
javascript/esm: EcmaScript modules, all other module system are not available (the default for .mjs files)
javascript/dynamic: Only CommonJS & AMD; EcmaScript modules are not available
json: JSON data, it’s available via require and import (the default for .json files)
webassembly/experimental: WebAssembly modules (currently experimental and the default for .wasm files)
- In addition webpack now looks for the
.wasm,
.mjs,
.jsand
.jsonextensions in this order to resolve
What’s most exciting about this feature, is that now we can continue to work on our CSS and HTML module types (slated for webpack 4.x to 5). This would allow capabilities like HTML as your entry-point!
🛑 If you use HtmlWebpackPlugin
For this release, we gave the ecosystem a month to upgrade any plugins or loaders to use the new webpack 4 API’s. However, Jan Nicklas has been away with work obligations, and therefore we have provided a patched fork of
html-webpack-plugin . For now you can install it by doing the following:
$> yarn add html-webpack-plugin@webpack-contrib/html-webpack-plugin
When Jan returns from overseas work at the end of the month, we plan to merge our fork upstream into
jantimon/html-webpack-plugin ! Until then, if you have any issues, you can submit them here!
UPDATE (3/1/2018): html-webpack-plugin@3 is now available with v4 support!!!!
If you own other plugins and loaders, you can see our migration guide here:
webpack 4: migration guide for plugins/loaders
This guide targets plugin and loader authors
medium.com
💖And so much more!
There are so many more features that we heavily recommend you check them all out on our official change log.
🐣 Where’s the v4 Docs?
We are very close to having out Migration Guide and v4 Docs Additions complete! To track the progress, or give a helping hand, please stop by our documentation repository, checkout the
next branch, and help out!
🤷 What about <framework>-cli?
Over the past 30 days we have worked closely with each of the frameworks to ensure that they are ready to support webpack 4 in their respective cli’s etc. Even popular library’s like lodash-es, RxJS are supporting the
sideEffects flag, so by using their latest version you will see instant bundle size decreases out of the box.
The AngularCLI team has said that they even plan on shipping their next major version (only ~week away) using webpack 4! If you want to know the status, reach out to them, and ask how you can help [instead of when it will be done].
😒Why do you use so many emojis?
Because we can have fun while creating an incredible product! You should try it sometime 😍.
🎨 Whats next?
We have already started planning our next set of features for webpack 4.x and 5! They include (but are not limited to):
- ESM Module Target
- Persistent Caching
- Move WebAssembly support from
experimentalto
stable. Add tree-shaking and dead code elimination!
- Presets — Extend 0CJS, anything can be Zero Config. The way it should be.
- CSS Module Type — CSS as Entry (Goodbye ExtractTextWebpackPlugin)
- HTML Module Type — HTML as Entry
- URL/File Module Type
- <Create Your Own> Module Type
- Multi-threading
- Redefining our Organization Charter and Mission Statement
- Google Summer of Code (Separate Post Coming Soon!!!) | https://medium.com/webpack/webpack-4-released-today-6cdb994702d4 | CC-MAIN-2020-29 | en | refinedweb |
Scaling code that we’re ready to share.
To recap the talk: at any given point over the last four years, we have had what I’d call a minimum viable caching system. The stages were:
- Stand up a Master-slave Memcached pair.
- Add sharded Redis, each shard a master-slave pair, with loosely Pinstagram-style persistence, consistent hashing based on fully distributed ketama clients, and Zookeeper to notify clients of configuration changes.
- Replace (1) with Wayfair-ketamafied Memcached, with no master-slaves, just ketama failover, also managed by Zookeeper.
- Put Twemproxy in front of the Memcached, with Wayfair-ketamafied Twemproxy hacked into it. The ketama code moves from clients, such as PHP scripts and Python services, to the proxy component. The two systems, one with configuration fully distributed, one proxy-based, maintain interoperability, and a few fully distributed clients remain alive to this day.
- Add Redis configuration improvements, especially 2 simultaneous hash rings for transitional states during cluster expansion.
- Switch all Redis keys to ‘Database 0’
- Put Wayfairized Twemproxy in front of Redis.
- Stand up a second Redis cluster in every data center, with essentially the same configuration as Memcached, where there’s no slave for each shard, and every key can be lazily populated from an interactive (non-batch) source.
The code we had to write was
- Some patches to Richard Jones’s ketama, described in full detail in the previous blog post:.
- Some patches to Twitter’s Twemproxy :, a minor change, making it interoperable with the previous item.
- Revisions to php-pecl-memcached, removing a ‘version’ check
- A Zookeeper script to nanny misbehaving cluster nodes. Here’s a gist to give the idea.
Twemproxy/Nutcracker has had Redis support from early on, but apparently Twitter does not run Twemproxy in front of Redis in production, as Yao Yue of Twitter’s cache team discusses here:
. So we are not necessarily surprised that it didn’t ‘just work’ for us without a slight modification, and the addition of the Zookeeper component.
Along the way, we considered two other solutions for all or part of this problem space: mcRouter and Redis cluster. There’s not much to the mcRouter decision. Facebook released McRouter last summer. Our core use cases were already covered by our evolving composite system, and it seemed like a lot of work to hack Redis support into it, so we didn’t do it. McRouter is an awesome piece of software, and in the abstract it is more full-featured than what we have. But since we’re already down the road of using Redis as a Twitter-style ‘data structures’ server, instead of something more special-purpose like Facebook’s Tao, which is the other thing that mcRouter supports, it felt imprudent to go out on a limb of Redis/mcRouter hacking. The other decision, the one where we decided not to use Redis cluster, was more of a gut-feel thing at the time: we did not want to centralize responsibility for serious matters like shard location with the database. Those databases have a lot to think about already! We’ll certainly continue to keep an eye on that product as it matures.
There’s a sort of footnote to the alternative technologies analysis that’s worth mentioning. We followed the ‘Database 0’ discussion among @antirez and his acolytes with interest. Long story short: numbered databases will continue to exist in Redis, but they are not supported in either Redis cluster or Twemproxy. That looks to us like the consensus of the relevant community. Like many people, we had started using the numbered databases as a quick and dirty set of namespaces quite some time ago, so we thought about hacking *that* into Twemproxy, but decided against it. And then of course we had to move all our data into Database 0, and get our namespace act together, which we did.
Mad props to the loosely confederated cast of characters that I call our distributed systems team. You won’t find them in the org chart at Wayfair, because having a centralized distributed systems team just feels wrong. They lurk in a seemingly random set of software and systems group throughout Wayfair engineering. Special honors to Clayton and Andrii for relentlessly cutting wasteful pieces of code out of components where they didn’t belong, and replacing them with leaner structures in the right subsystem.
Even madder props to the same pair of engineers, for seamless handling of the operational aspects of transitions, as we hit various milestones along this road. Here are some graphs, from milestone game days. In the first one, we start using Twemproxy for data that was already in Database 0. We cut connections to Redis in half:
Then we take another big step down.
Add the two steps, and we’re going from 8K connections, to 219. Sorry for the past, network people, and thanks for your patience! We promise to be good citizens from now on.
[Update: I gave a talk about this at Facebook’s Data@Scale Boston 2014 conference ]
Responses
April 9th, 2015
Great work, thanks for the update Ben!
May 24th, 2019
Great move. I belong to retail industry and i am personally evaluating different caching solution to remove bottlenecks and improve performance. i found NCache as alternative to Redis and now i am confused. Can you please share with me the Redis performance and issues? It will help me in taking a better decision.
Reference: | https://tech.wayfair.com/2015/03/scaling-redis-and-memcached-at-wayfair/ | CC-MAIN-2020-29 | en | refinedweb |
A Temperature Controlled Relay Circuit
To show you how to wire the relay, let’s build a temperature controlled relay circuit that will turn off a light bulb when the temperature of a thermistor reaches 150 °F. Thermistors are really useful with 5V relays. You can use them to turn off a large motor if gets too hot or turn on a heater if the temperature gets too cold.
WARNING – THIS PROJECT INVOLVES HIGH VOLTAGES THAT CAN CAUSE SERIOUS INJURY OR DEATH. PLEASE TAKE ALL NECESSARY PRECAUTIONS, AND TURN OFF ALL POWER TO A CIRCUIT BEFORE WORKING ON IT.
The setup is fairly simple, just make sure that the high voltage connections to the relay are secure:
Identify the hot power wire (red wire in the diagram above) in the cord leading to the light bulb and make a cut. Connect the side leading to the light bulb to the NO terminal of the relay, and the side leading to the plug to the C terminal. This way the relay is on the hot side, and current is switched before it reaches the light bulb. It’s dangerous to put the relay on the neutral wire, since if the device fails current can still fault to ground when the relay is off.
The thermistor part of the circuit is set up as a voltage divider..
If you do use a 100K Ω thermistor, you’ll need to change line 7 in the code below to Temp = log(100000.0*((1024.0/RawADC-1)));. See our article on Making an Arduino Temperature Sensor for more information.
The Code
After everything is connected, upload this code to the Arduino:
#include <math.h> int pinOut = 10; double Thermistor(int RawADC) { double Temp; Temp = log(10000.0*((1024.0/RawADC-1))); Temp = 1 / (0.001129148 + (0.000234125 + (0.0000000876741 * Temp * Temp ))* Temp ); Temp = Temp - 273.15; Temp = (Temp * 9.0)/ 5.0 + 32.0; return Temp; } void setup() { Serial.begin(9600); pinMode(10, OUTPUT); } void loop() { int val; double temp; val=analogRead(0); temp=Thermistor(val); Serial.print("Temperature = "); Serial.print(temp); Serial.println(" F"); if (temp >= 150){ digitalWrite(pinOut, LOW); } else { digitalWrite(pinOut, HIGH); } delay(500); }
In this example, the relay will stay activated and let current flow through the light bulb until the temperature of the thermistor reaches 150 °F. At 150 °F the relay shuts off and the current stops. You can change the temperature in line 27 where it says if (temp >= 150){.
Another really useful project is a relay controlled power outlet box. This will let you plug any appliance into the outlet and control it with your Arduino without cutting into any power cords. You can also control several devices at the same time. See our tutorial “Turn Any Appliance into a Smart Device with an Arduino Controlled Power Outlet” to see how we built it.
Hope this is useful for you! Leave a comment if you have any questions. If you want us to let you know when we publish more tutorials, be sure to subscribe and share it if you know someone that would find it helpful. Thanks!
How to Power Your Arduino With a Battery
June 13, 2020
Sir, How can i display the temperature on LCD and serial monitor at the same time?
Check out our article on LCD displays for the Arduino, it should explain what you need to do that:
Where did you purchase this relay? I can’t seem to find it. I’ve found similar ones, but not this one.
I got it from Amazon, here’s a link to it
that relay really should be in a box. Mains nips are painful & dangerous.
That’s very true! You might want to check out this other article I wrote about how to make a relay controlled power outlet box:
Hi there awesome video!! I was wondering how i can do this project with a light fixture that just has a hot wire and a neutral wire?
Mario:
If you see the diagram, the black wire between the plug and the light fixture is the neutral wire.
So in this case, like in yours, the light fixture just has a hot wire and a neutral wire.
So this project is EXACTLY what you are looking for.
:-)
The relay is UR (UL Recognized) but I would love to see the bottom of the PCB to verify proper creepage distances.
Thanks for sharing! You’re now featured on
@iotcentrum
What is the value of Resistor and capacitor which is connected in bread board, It must required?
The black thing that looks like a ceramic disc capacitor is actually the thermistor. This is a voltage divider circuit, so the value of the resistor should be of the same magnitude as the resistance of your thermistor. For example, if you have a 100K Ohm thermistor, the resistor should be 100K Ohms also. Thanks for bringing that up, I will update the post…
Very nice, clear explanation.
Thank you.
How would I attach more than 1 relay (say 3) to control several loads individually?
is this sketch also for degrees C, instead of F??
It’s for Fahrenheit, but comment out line 10 to get Celsius…
Not today, but really soon! Thanks…
Thumbs up for sharing that tweet. It’s now live on my @RebelMouse!
Love to see a tutorial using a MOSFET in place of the relay for voltages higher than 5V.
Would it be right to assume that if this approach is used on a 240V circuit that it can only interrupt one of the two phases, and that as a result the second phase will remain “hot” (and potentially lethal) at the load, even when it’s “off”?
If so, the modification to use the same idea with a 240V circuit would be to use a DPDT (double-pole/double-throw) relay instead: the poles being used for the two 240V hots and the throws being normally-open (NO) and normally-close (NC).
At least that’s what I’m setting out to try.
Thank you for the most excellent pictures.
And your photos were very helpful too. ;^)
Possibly like this Fujitsu FTR-F1.
can you explain more about the thermistor and the program written for it in the sketch?
Check out our other article on thermistors and the Arduino, I go a lot more in depth into the program and set up of the thermistor:
will this still work even when i used the thermistor which is above 100k for recording high temperatures of around 200 celcius and 300 celcius for a 3d printer head
and what is power rating of a 10k resistor
Yes it will work for any size thermistor… You just want the resistor in the voltage divider be around the same resistance as your thermistor. Then you have to change line 7 in the code to the resistance value. So if you use a 200K thermistor, line 7 would be:
Temp = log(200000.0*((1024.0/RawADC-1)));
You say, just under the image, to “Connect the power wire of the light bulb cord to the NO (normally open) terminal of the relay, and connect the neutral power wire of the cord to the C (common) terminal of the relay.”
The image show the power wire is cut and the cut ends are then connected to the relay common and NO intact. Connecting power to neutral should blow a fuse when the relay closes the contacts!
Cheers,
Norm.
Sorry, can’t spell.
Contact, not intact. I hate auto correct sometimes!
Cheers,
Norm.
Sorry, that was an error… Thanks for commenting about it, I just changed the post.
Great tutorial! If you want to save a few bucks and just buy the relay itself, is a great tutorial on how to properly size your components around the relay.
Nice, thanks
And that capacitor between the vcc and gnd is for? decoupling?
I think the capacitor you’re talking about is actually a thermistor.
How about using dust sensor ?
I just used a thermistor because it’s a simple example, but any sensor that can be connected to the Arduino will work. It’s just a matter of writing the code that will take the input from the sensor and using it to produce a HIGH signal at one of the Arduino’s pins.
Can I use this relay to just make a connection without transferring power, like a makeshift momentary switch?
I need to control the relay using a serial monitor commands, is that possible? And how to do it , Thanks
Yes you can
Sir, if i’m using with DHT11and i want to use it for humidity, what should i change in the coding?
I’ve built the relay into a power outlet box, but the code takes the humidity reading from a DHT11 and uses it to control the relay.
any pointers on controlling power to ac devices? So say varying heat output of a hairdryer for example?
Hi I was just wondering if this relay would also work for outputting lower voltages, I am trying to use a push button to activate a relay from my 5v pin on the arduino uno with the output attached to a 6v power wheels motor ( the current is too high for me to want that involved with anything on my arduino). I saw the voltage rating for this is 120v, but was wondering if it will work for such a lesser voltage
Relay is an electromechanical switch and as such “doesn’t care” about the voltage going through – use it freely.
With high powers /high voltages it is useful because it totally separates both circuits (safety concerns). For lower powers /lower voltages you can also use a FET (“sort of” transistor) as a switch – it is much more simple. Switching a FET / transistor with arduino “out” pins is well documented.
*For the sake of accuracy: you can also switch bigger loads with transistors / FETs … Relay is a kind of obsolete tech really.
Hi
I have a question about the Songle relays. The spec sheet says the current draw is about 85mA. Can you tell me the current draw of the input pin only. I presume most of the current comes from Vcc.
Some basic remarks:
1) Relays: in DC-control-circuits should have a flyback diode connected in parallel to the coil of the relay (eg 1N4001 – but check the relay data sheet for choosing the right diode). Since an inductor (the relay coil) cannot change it’s current instantly, the flyback diode provides a path for the current when the coil is switched off. Otherwise a large voltage spike will occur causing arcing on the switch contacts which will likely destroy the switching transistor, etc..
3) Powering the Arduino by the means of the USB-connection (which provides appr 100mA) may be not enough to switch relays. Use a separate wall-plug with sufficient power (>500mA, better more).
4) If the relay should switch AC-loads, the use of an opt-isolator is recommended to separate the control-circuit from the switching-circuit (relay). Also the electrical paths for the AC-lines on the relay-module should be wide enough (min 3mm), as short as possible, separated from any other path on the PCB and not too close to each other.
This is useful to me.. this content have a useful information about basics of relays.
Mohammad Abdulhay this might be useful
Sir,
I want to create continuous fast pulses sent to the relay. Please advise on the code to use, and the circuit.
relays are electromechanical devices and as such VERY UNSUITABLE for “continuous fast pulses”. Use “solid state relay” assembly or transistor+optocoupler instead.
How did you draw the relay in Fritzing software? Please help me with this.
Hi. How cn i operate a complete home with adruino and relays… Imagine i have more than 50 Switches to operate in a complete home. Do i have to keep 50 Relays?
Would be grateful for a schematic of the circuit.
This post is inaccurate. Is misleading. Leads to overloading Arduino. Is dangerous.
– A relay is not the same thing as the “Relay module” you are using here. I came to this page looking how to hook up the relay (just the component, so this post doesn’t help me much). No relay (as a component) has a “signal” pin on the coil side, just two pins (which then need additional circuitry for load and spikes regulation). This is a big disinformation – try to change this.
– I have trouble understanding why did you have to complicate / and obfuscate this post with temp sensing – if the title is about using the relay (or relay module), make a decent post about it … i have spent 5 minute looking at the fritzing pic wandering what is the point of thermistor (it could be inrush current regulation, but then it wouldn’t be in circuit this way) – then I realized it is a part of some other bussiness
– it is a “circuit basics” page. I strongly advise not to post high voltage designs to such a page – look at the comments and questions in them if you think I am overreacting. Make the relay switch a high current / low voltage load if you want to make an instructable.
great! I’m making a circuit for an ice maker, but I have trouble finding the thermistor data sheet, and I saw that in your code you have the coefficient of the thermistor that you use, I want to know how I can get the data I have I just know it’s 10k and nothing else.
or with the formula of your code can I apply it to this thermistor?
I would appreciate your help,
This is the first time I have messed with a relay so I guess there are conventions involved; however, the fact the “C” stands for “common” seems dangerous. In household AC the neutral wire is also sometimes referred to as “common”. But in this circuit the hot wire is connected to “common”. Is this just a convention when dealing with relay’s I am not aware of?
I want to do something similar with a DHT 11, but i have a problem with this. If the temperature reaches the vale, and the lamp turns off, the temperature will also go lower, and that is going to make the lamp go on and off in aloop. How do I introduce a range, so it gets more stable?
Dears, i have a doubt.
You said that we have to identify the power wire! Suppose the three hole of my wall socket are: A – the hot power wire, T- the ground and B – the neutral wire.
I named the three pin of the plug A1, T1 and B1 ad connected A1 to C on the relay and B1 to the lamp.
If i insert the pin A1 to A and B1 to B in the wall socket the condition of your diagram is satisfied but i can insert the plug two wais. If i insert A1 to B and B1 to A then power wire goes direc to the lamp and the neutral to the relay.
This, probably, is a problem. How can it be solved?
Thanks in advance.
Can we control 12V DC circuit using the same 5V relay
Hello sir, I want to use the 5v relay but controlling it via the computer keyboard chip. when capslock is on the is 3v across it, so i wanted to use this voltage to control my 5v relay.
Schematic or references will be helpful….THANK YOU
Hi, on fritzing I can’t find the relay image like the one used by you, how did you insert it? where do you find it?’m trying to connect ldr with automatic light intensity change and I used the online code to interface and controll but the ldr was automatically on and off without intensity change and in dark time the led goes to off state and also dark in the automatically light goes to off reduce this please provide the correct solution to solve this problem thank you
Hi its very useful to us i tried to control the light using arduino uno with two channel relay board for automatic light ON & OFF system using LDR, i change my option this is very useful application for home applications Thanks a lot…
Hi,
I’m watching your videos on using Arduino and 5v relays…
I’m trying to figure out some simple code to get a temp reading from DarkSky and if the temp is below, 30, turn on the relay.
And when it goes back above 30, to turn the relay back off.
But, I’m a novice and can’t figure it out.
Can you help or suggest where to go for help?
I appreciate you explaining to make sure to secure the voltage connections and the relay. when setting it up. My son has a project for school, and he wants to make an electric car. I will be sure to get him the relays he needs and other parts to complete his project.
hi , is it possible to drive mosfet switch (h-bridge) using relay as an interface between arduino and h-bridge?
Sir, how did you get this specific relay into the Fritzing software | https://www.circuitbasics.com/setting-up-a-5v-relay-on-the-arduino | CC-MAIN-2020-29 | en | refinedweb |
From: Gennaro Prota (gennaro_prota_at_[hidden])
Date: 2002-08-22 15:50:19
On Thu, 22 Aug 2002 21:40:00 +0200, Alberto Barbati
<abarbati_at_[hidden]> wrote:
>In fact, one thing that striked me at first when studying lexical_cast is
>that with the current implementation, the expression:
>
>boost::lexical_cast<std::string>(std::string("hello, world"))
>
>not only does not return the string "hello, world" but throws an
>exception because of the space. It's very counter-intuitive, IMHO.
>
This is not true, because in that case lexical_cast uses
direct_cast_base.
BTW, you said that your question arose on the STLport forum. From what
I've seen that forum seems quite extinguished however: a few time ago
we reported a bug (actually it's the same bug repeated in more that
one function) that concerns extraction of integers from streams.
Nobody replied. Should we mail Boris directly?
For your information the problem is in the implementation of the
functions __get_integer in the file _num_get.c. To reproduce it try
this:
---------------------
#include <sstream>
#include <iomanip>
#include <iostream>
// The value of str must be the
// string representation of 1+LONG_MAX.
// Change it for your machine if needed.
//
int main(int argc, char* argv)
{
using namespace std;
long num;
stringstream str;
str << "2147483648"; // 1+LONG_MAX
str >> num;
cout << "fail()? " << boolalpha << (str.fail()) << '\n';
return 0;
}
----------------------------
If you use 2+LONG_MAX instead then the stream is put in a failed
state.
Genny.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2002/08/34489.php | CC-MAIN-2020-29 | en | refinedweb |
The code is below. The program runs a series of calculations based on data input by the user. My problem is that for the most important thing I'm looking for, total kg CO2 emissions, I continually get an answer of 0.0. What I need is a sum of the individual total emissions as calculated in each method, i.e. the values which are printed with the following: System.out.println(trans); System.out.println(elec); and System.out.println(food);
The total should be something like 25040 or whatever, depending on the value of the inputs provided by the user, but I'm constantly getting a total of 0.0., which is obviously false. Could have something to do with the way I've initialized my variables, or something to do with the limitations of returning values from methods. I just don't know what to do. How should I tackle this? All help greatly appreciated!
import java.util.Scanner; public class CarbonCalc { public static void main(String[] args) { double trans = 0; double elec = 0; double food = 0; giveIntro(); determineTransportationEmission(null); determineElecticityEmission(null); determineFoodEmission(null); calculateTotalEmission(trans, elec, food); //printReport(trans, elec, food); } //Gives a brief introduction to the user. public static void giveIntro() { System.out.println("This program will estimate your carbon footprint"); System.out.println("(in metric tons per year) by asking you"); System.out.println("to input relevant household data."); System.out.println(""); } //Determines the user's transportation-related carbon emissions. public static double determineTransportationEmission(Scanner input) { Scanner console = new Scanner(System.in); System.out.println("We will first begin with your transportation-related carbon expenditures..."); System.out.print("How many kilometres do you drive per day? "); double kmPerDay = console.nextDouble(); System.out.print("What is your car's fuel efficiency (in km/litre)? "); double fuelEfficiency = console.nextDouble(); System.out.println("We now know that the numeber of litres you use per year is..."); double litresUsedPerYear = 365.00 * (kmPerDay / fuelEfficiency); System.out.println(litresUsedPerYear); System.out.println("...and the kg of transportation-related CO2 you emit must be..."); //Final calculation of transportation-related kgCO2 emissions. double trans = 2.3 * litresUsedPerYear; System.out.println(trans); System.out.println(""); return trans; } //Determines the user's electricity-related carbon emissions. public static double determineElecticityEmission(Scanner input) { Scanner console = new Scanner(System.in); System.out.println("We will now move on to your electricity-related carbon expenditures..."); System.out.print("What is your monthly kilowatt usage (kWh/mo)? "); double kWhPerMonth = console.nextDouble(); System.out.print("How many people live in your home? "); double numPeopleInHome = console.nextDouble(); System.out.println("The kg of electricity-related CO2 you emit must be..."); //Final calculation of electricity-related kgCO2 emissions. double elec = (kWhPerMonth * 12 * 0.257) / numPeopleInHome; System.out.println(elec); System.out.println(""); return elec; } //Determines the user's food-related carbon emissions. public static double determineFoodEmission(Scanner input) { Scanner console = new Scanner(System.in); System.out.println("We will now move on to your food-related carbon expenditures..."); System.out.print("In a given year, what percentage of your diet is meat? "); double meat = console.nextDouble(); System.out.print("In a given year, what percentage of your diet is dairy? "); double dairy = console.nextDouble(); System.out.print("In a given year, what percentage of your diet is fruits and veggies? "); double fruitVeg = console.nextDouble(); System.out.print("In a given year, what percentage of your diet is carbohydrates? "); double carbs = console.nextDouble(); //Final calculation of food-related kgCO2 emissions. System.out.println("The kg of food-related CO2 you emit must be..."); double food = (meat * 53.1 + dairy * 13.8 + fruitVeg * 7.6 + carbs * 3.1); System.out.println(food); System.out.println(""); return food; } //Calculates total emissions across all sources. public static double calculateTotalEmission(double trans, double elec, double food) { System.out.println("Your total kg of CO2 emitted across all sources is equal to..."); double total = trans + elec + food; System.out.println(total); System.out.println(""); return total; } } | https://www.javaprogrammingforums.com/whats-wrong-my-code/39787-hw-help-returning-values-methods.html | CC-MAIN-2020-29 | en | refinedweb |
Enable injection with the environment variable
DD_LOGS_INJECTION=true when using
ddtrace-run.
If you have configured your tracer with
DD_ENV,
DD_SERVICE, and
DD_VERSION, then
env,
service, and
version will also be added automatically. Learn more about unified service tagging.
Note: The standard library
logging is supported for auto-injection. Any libraries, such as
json_log_formatter, that extend the standard library module are also supported for auto-injection.
ddtrace-run calls
logging.basicConfig before executing your application. If the root logger has a handler configured, your application must modify the root logger and handler directly.
If you prefer to manually correlate your traces with your logs, patch your
logging module by updating your log formatter to include the
dd.trace_id and
dd.span_id attributes from the log record.
Similarly, include
dd.env,
dd.service, and
dd.version as attributes for your log record.
The configuration below is used by the automatic injection method and is supported by default in the Python Log Integration:
from ddtrace import patch_all; patch_all(logging=True) import logging from ddtrace import tracer FORMAT = ('%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] ' '[dd.service=%(dd.service)s dd.env=%(dd.env)s dd.version=%(dd.version)s dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] ' '- %(message)s') logging.basicConfig(format=FORMAT) log = logging.getLogger(__name__) log.level = logging.INFO @tracer.wrap() def hello(): log.info('Hello, World!') hello()
If you are not using the standard library
logging module, you can use the
ddtrace.helpers.get_correlation_ids() to inject tracer information into your logs.
As an illustration of this approach, the following example defines a function as a processor in
structlog to add tracer fields to the log output:
import ddtrace from ddtrace.helpers import get_correlation_ids import structlog def tracer_injection(logger, log_method, event_dict): # get correlation ids from current tracer context trace_id, span_id = get_correlation_ids() # add ids to structlog event dictionary event_dict['dd.trace_id'] = trace_id or 0 event_dict['dd.span_id'] = span_id or 0 # add the env, service, and version configured for the tracer event_dict['dd.env'] = ddtrace.config.env or "" event_dict['dd.service'] = ddtrace.config.service or "" event_dict['dd.version'] = ddtrace.config.version or "" return event_dict structlog.configure( processors=[ tracer_injection, structlog.processors.JSONRenderer() ] ) log = structlog.get_logger()
Once the logger is configured, executing a traced function that logs an event yields the injected tracer information:
>>> traced_func() {"event": "In tracer context", "dd": {"trace_id": 9982398928418628468, "span_id": 10130028953923355146, "env": "dev", "service": "hello", "version": "abc123"}}
Note: If you are not using a Datadog Log Integration to parse your logs, custom log parsing rules need to ensure that
dd.trace_id and
dd.span_id are being parsed as strings. More information can be found in the FAQ on this topic.
See the Python logging documentation to ensure that the Python Log Integration is properly configured so that your Python logs are automatically parsed. | https://docs.datadoghq.com/ja/tracing/connect_logs_and_traces/python/ | CC-MAIN-2020-29 | en | refinedweb |
Source: Deep Learning on Medium
MNIST digits classification with Deep Learning using Python and Numpy
In this era of deep learning frameworks, many who start out with the subject often just watch a video or attend a class to learn the theory about Neural Networks and algorithms that are associated with it and then straightaway code their way into DL Frameworks such as Tensorflow, Pytorch, Keras and many more. But in order to understand the nuts and bolts of how they all work, it is very important to go down the inner workings and build our intuition from there. Here I attempt to do the same with the classical problem of machine learning, the MNIST dataset of handwritten digits where only with the theoretical knowledge of the functioning of Neural Networks, some algorithms, Python and Numpy we could put together quite a decent Deep Neural Network.
Basic knowledge of the concept of Neural Networks and Deep Learning is assumed to be known.
The basic architecture of our NN (Neural Network) will be that, it’d contain a total of 4 layers (barring the input layer) out of which, 3 of them are hidden layers of 50 neurons each and the output layer containing 10 neurons. We use One Hot Encoding for the labels in both the train and test set for classification of digits from 0 to 9 i.e 10 classes.
- Importing the libraries and the dataset. Here we also import ‘to_categorical’ from Keras for One Hot Encoding.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from keras.utils import to_categorical
2. Load the dataset by splitting it into (X_train_orig, Y_train_orig), (X_test_orig, Y_test_orig). We have used ‘orig’ because later we need to process the data for which we’ll then later assign the simpler (X_train, Y_train), (X_test, Y_test).
(X_train_orig, Y_train_orig), (X_test_orig, Y_test_orig) = mnist.load_data()
3. Processing the data for Y labels. We first reshape the original Y_train and Y_test into (60000, 1) and (10000, 1) respectively, because by default the Tensorflow dataset has this weird shape of (60000,) and (10000,). Then we use ‘to_categorical’ function to One Hot Encode them to 10 classes by passing the argument ‘num_classes=10’. Then the data is transposed to give us Y_train and Y_test.
Y_tr_resh = Y_train_orig.reshape(60000, 1)
Y_te_resh = Y_test_orig.reshape(10000, 1)
Y_tr_T = to_categorical(Y_tr_resh, num_classes=10)
Y_te_T = to_categorical(Y_te_resh, num_classes=10)
Y_train = Y_tr_T.T
Y_test = Y_te_T.T
4. Processing the data for X pixel intensity input values. Originally the X train and test set have this format for their shape which is (60000, 28, 28) and (10000, 28, 28) wherein each image is 28 by 28 pixels in size. To apply the concept of Deep Learning here, we need to ROLL out that 28 by 28 pixels square into a horizontal line containing the same pixels, i.e in total 28*28 = 784 pixels. This process is known as FLATTENING the image. Later we divide both the train and test set by 255. to normalize the pixel intensity values between 0 and 1 for further calculations to give us X_train and X_test.
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
X_train = X_train_flatten / 255.
X_test = X_test_flatten / 255.
5. Define the Activation Functions. Here we’ll use Relu for the hidden layer activations and Softmax for the output layer activations.
def relu(p):
return np.maximum(0, p)def softmax(u):
return np.exp(u) / np.sum(np.exp(u), axis=0, keepdims=True)
6. Initialise the Weights and Biases for each of the layers. We use a ‘for loop’ to do that and simplify our work and feed in the array with the total number of units in each layer (including the input layer). We initialize the weights randomly and initialize the biases to zeros.
parameters = {}
def initialize_parameters(layer_dims):
L = len(layer_dims)
for l in range(1, L):
parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * (np.sqrt(2 / layer_dims[l - 1]))
parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))
return parameters
To make sure all your matrix dimensions match, print them all out (later comment them out).
for l in range(1, 5):
print("W" + str(l) + " = " + str(parameters["W" + str(l)]))
print("W" + str(l) + "shape" + " = " + str(parameters["W" + str(l)].shape))
print("b" + str(l) + " = " + str(parameters["b" + str(l)]))
print("b" + str(l) + "shape" + " = " + str(parameters["b" + str(l)].shape))
7. Forward Propagation
outputs = {}
activation = {}
def forward_prop(parameters, X_train, activation):
m = X_train.shape[1]
outputs["Z" + str(1)] = np.dot(parameters["W1"], X_train) + parameters["b1"]
activation["A" + str(1)] = relu(outputs["Z" + str(1)])
for l in range(2, 4):
outputs["Z" + str(l)] = np.dot(parameters["W" + str(l)], activation["A" + str(l - 1)]) + parameters["b" + str(l)]
activation["A" + str(l)] = relu(outputs["Z" + str(l)])
outputs["Z4"] = np.dot(parameters["W4"], activation["A3"]) + parameters["b4"]
activation["A4"] = softmax(outputs["Z4"])
return outputs, activation
8. Compute the Cost using the cross entropy loss to compute their average to get the Cost.
def compute_cost(activation):
loss = - np.sum((Y_train * np.log(activation["A4"])), axis=0, keepdims=True)
cost = np.sum(loss, axis=1) / m
return cost
9. Define a function to calculate the derivative of the Relu activation function
def drelu(x):
x[x <= 0] = 0
x[x > 0] = 1
return x
10. Compute the gradients of the Loss w.r.t the Weights and the Biases
grad_reg = {}
m = X_train.shape[1]
def grad_re(parameters, outputs, activation):
grad_reg["dZ4"] = (activation["A4"] - Y_train) / m
for l in range(1, 4):
grad_reg["dA" + str(4 - l)] = np.dot(parameters["W" + str(4 - l + 1)].T, grad_reg["dZ" + str(4 - l + 1)])
grad_reg["dZ" + str(4 - l)] = grad_reg["dA" + str(4 - l)] * drelu(outputs["Z" + str(4 - l)])
grad_reg["dW1"] = np.dot(grad_reg["dZ1"], X_train.T)
grad_reg["db1"] = np.sum(grad_reg["dZ1"], axis=1, keepdims=True)
for l in range(2, 5):
grad_reg["dW" + str(l)] = np.dot(grad_reg["dZ" + str(l)], activation["A" + str(l - 1)].T)
grad_reg["db" + str(l)] = np.sum(grad_reg["dZ" + str(l)], axis=1, keepdims=True)
return parameters, outputs, activation, grad_reg
11. Update the parameters using gradient descent algorithm in which you minimize the Cost function w.r.t the parameters.
def learning(grad_reg, learning_rate=0.005):
for i in range(1, 5):
parameters["W" + str(i)] = parameters["W" + str(i)] - (learning_rate * grad_reg["dW" + str(i)])
parameters["b" + str(i)] = parameters["b" + str(i)] - (learning_rate * grad_reg["db" + str(i)])
return parameters
12. The Model. Put together all these above functions into another function which is the model we’ll be executing for the final computations.
num_iterations = 1000
print_cost = True
costs = []
def grad_descent(num_iterations, costs, activation):
initialize_parameters([X_train.shape[0], 50, 50, 50, 10])
for l in range(0, num_iterations):
forward_prop(parameters, X_train, activation)
cost = compute_cost(activation)
grad_re(parameters, outputs, activation)
learning(grad_reg, learning_rate=0.005)
if l % 100 == 0:
costs.append(cost)
if print_cost and l % 100 == 0:
print("Cost after iteration %i: %f" % (l, cost))
return costs, parameters
13. Execution. Call the function ‘grad_descent’ to start the computations of jiggling the parameters.
grad_descent(num_iterations, costs, activation)
14. Plotting. Visualise the performance of your model using MatplotLib
plt.plot(costs)
plt.xlabel('iterations')
plt.ylabel('cost')
plt.show()
For my execution of the model with learning_rate = 0.005 and num_iterations = 1000, I obtained the following plot of the Cost vs Number of Iterations curve which proves that the model is healthy because the cost function is decreasing as the num_iterations increases.
I’m a student too and I have yet to learn to evaluate the model for accuracy, precision and recall for multi-label classification using Confusion Matrix after which I’ll be updating the code.
Also I’ll be publishing my work on Regularization and Optimization for better performance of NN on MEDIUM after I’m done with it which also will be done with Python and Numpy only.
Kudos on reaching the end! I’d provided the code in chunks so that it could be implemented in Jupyter Notebooks easily. I’ll also provide the final code I used here.
GitHub Link —
LinkedIn: Pranav Sastry GitHub: pranavsastry | https://mc.ai/mnist-digits-classification-with-deep-learning-using-python-and-numpy/ | CC-MAIN-2020-29 | en | refinedweb |
The data access module provides a basic framework for accessing data. More...
The data access module provides a basic framework for accessing data.
It is designed towards data interoperability. You can write your own driver for accessing a different data source. Of course, we provide some basic data sources drivers..
The main classes/concepts in this module are listed here. The namespace associated to the Data Access module is te::da. To know more about it, see the te::da namespace documentation. | http://www.dpi.inpe.br/terralib5/codedocs_5.4.0/d8/d18/group__dataaccess.html | CC-MAIN-2020-29 | en | refinedweb |
Other Aliassd_bus_request_name
SYNOPSIS
#include <systemd/sd-bus.h>
- int sd_bus_request_name(sd_bus *bus, const char *name, uint64_t flags);
- int sd_bus_release_name(sd_bus *bus, const char *name);
DESCRIPTION
sd_bus_request_name().
NOTES
The sd_bus_acquire_name() and sd_bus_release_name() interfaces are available as a shared library, which can be compiled and linked to with the libsystemd pkg-config(1) file. | http://manpages.org/sd_bus_release_name/3 | CC-MAIN-2020-29 | en | refinedweb |
How matrix and vice versa.
- Use AngleAxis function to create rotation matrix in a single line.
Example Implementation
To use the library, the following includes are recommended:
#include <Eigen/Geometry> #include <Eigen/Dense> #include <eigen_conversions/eigen_msg.h> #include <Eigen/Core>
For instance, a rotation matrix homogeneous transform of PI/2 about z-axis can be written as:
Eigen::Affine3d T_rt(Eigen::AngleAxisd(M_PI/2.0, Eigen::Vector3d::UnitZ()));
Additionally, you can:
- Extract rotation matrix from Affine matrix using
Eigen::Affine3d Mat.rotation( )
- Extract translation vector from Affine Matrix using
Eigen::Affine3d Mat.translation( )
- Find inverse and transpose of a matrix using
Mat.inverse( ) and Mat.transpose( )
The applications are the following
- Convert Pose to Quaternions and vice versa
- Find the relative pose transformations by just using simple 3D homogeneous transformation
Eigen::Affine3d Tis a 4*4 homogeneous transform:
- Now all the transformations (rotation or translation) can be represented in homogeneous form as simple 4*4 matrix multiplications.
- Suppose you have a pose transform T of robot in the world and you want to find robot’s X-direction relative to the world. You can do this by using
Eigen::Vector3d x_bearing= T.rotation * Eigen::Vector3d::UnitX();
References
This is an important library in c++ which gives capabilities equal to Python for vectors and matrices. More helpful functions and examples can be found at the following links
- Eigen Documentation:
- Eigen Quaternion Documentation:
- Eigen Transforms Documentation: | https://roboticsknowledgebase.com/wiki/programming/eigen-library/ | CC-MAIN-2020-29 | en | refinedweb |
For instance, in the following code:
class Animal
class Dog extends Animal
trait Base {
def a: Animal = new Dog
}
trait Deri extends Base {
override val a: Dog
}
error: overriding value a in trait Deri of type Dog; method a in
trait Base of type => Animal needs to be a stable, immutable value;
(Note that value a in trait Deri of type Dog is abstract, and is
therefore overridden by concrete method a in trait Base of type =>
Animal)
a
Deri
override
a
Deri
Base
According to Scala Spec, a concrete definition always overrides an abstract definition.
This definition also determines the overriding relationships between matching members of a class C and its parents. First, a concrete definition always overrides an abstract definition. Second, for definitions M and M' which are both concrete or both abstract, M overrides M′ if M appears in a class that precedes (in the linearization of C) the class in which M′ is defined.
So, to make it compiled, you have to make sure the abstract method is overridable by the concrete one. Change Deri:
trait Deri extends Base { override def a:Animal }
or change Base
trait Base { val a: Dog = new Dog } | https://codedump.io/share/6l9QuOxL9Snb/1/why-can39t-concrete-members-be-overridden-with-abstract-ones-in-scala | CC-MAIN-2017-51 | en | refinedweb |
I need help understanding some of the outputs of the code below. (This is just a sample question for a midterm, not homework).
#include <stdio.h>
void figure_me_out(int* a, int b, int c, int* d);
int main(void) {
int var1 = 1, var2 = 10, var3 = 15, var4 = 20;
figure_me_out(&var1, var2, var3, &var4);
printf("%d, %d, %d, %d\n", var1, var2, var3, var4);
return 0;
}
void figure_me_out(int* a, int b, int c, int* d) {
c = b;
b = *d;
*a = 222;
*d = 100;
a = d;
*a = c;
}
222, 10, 15, 10
Its simple I think, Lets go step by step:
void figure_me_out(int* a, int b, int c, int* d) { c = b; // c = 10 b = *d; // b = 20 *a = 222; // *a = 222 : Value at address a is changed to 222 *d = 100; // *d = 100 : Value at address d is changed to 100 a = d; // a = d: Change address of local pointer variable a to d. *a = c; // Changing value of address a which is same as address d to 10 }
In step 3 you have changed original value at address a which you have passed from main function. In step 5 you are changing passed address from main to local variable a in function. After doing a = d, local variable having address is changed to address of d. Now anything you do with this address will get effected at address location of d In step 6 you have changed the value of d to 10.
So final answer coming is 222, 10, 15, 10 | https://codedump.io/share/YjzLfbuNR5iW/1/determining-the-output-with-pointers | CC-MAIN-2017-51 | en | refinedweb |
java.lang.Object
org.netlib.lapack.SGEESXorg.netlib.lapack.SGEESX
public class SGEESX
SGEESX is a simplified interface to the JLAPACK routine sge two REAL (input) CHARACTER*1 * Determines which reciprocal condition numbers are computed. * = 'N': None are computed; * = 'E': Computed for average of selected eigenvalues only; * = 'V': Computed for selected right invariant subspace only; * = 'B': Computed for both. * If SENSE = 'E', 'V' or 'B', SORT must equal 'S'. * * N (input) INTEGER * The order of the matrix A. N >= 0. * * A (input/output) REAL array, dimension (LDA, N) * On entry, the N-by-N matrix A. * On exit, A is) REAL (output) REAL array, dimension (LDVS,N) * If JOBVS = 'V', VS contains the orthogonal matrix Z of Schur * vectors. * If JOBVS = 'N', VS is not referenced. * * LDVS (input) INTEGER * The leading dimension of the array VS. LDVS >= 1, and if * JOBVS = 'V', LDVS >= N. * * RCONDE (output) REAL * If SENSE = 'E' or 'B', RCONDE contains the reciprocal * condition number for the average of the selected eigenvalues. * Not referenced if SENSE = 'N' or 'V'. * * RCONDV (output) REAL * If SENSE = 'V' or 'B', RCONDV contains the reciprocal * condition number for the selected right invariant subspace. * Not referenced if SENSE = 'N' or 'E'. * * WORK (workspace/output) REAL array, dimension (LWORK) * On exit, if INFO = 0, WORK(1) returns the optimal LWORK. * * LWORK (input). * For good performance, LWORK must generally be larger. * * IWORK (workspace/output) INTEGER array, dimension (LIWORK) * Not referenced if SENSE = 'N' or 'E'. * On exit, if INFO = 0, IWORK(1) returns the optimal LIWORK. * * LIWORK (input) INTEGER * The dimension of the array IWORK. * LIWORK >= 1; if SENSE = 'V' or 'B', LIWORK >= SDIM*(N-SDIM). * * WR and WI * contain those eigenvalues which have converged; if * JOBVS = 'V', VS contains the transformation which * reduces A to its partially converged Schur form. * = N+1: the eigenvalues could not be reordered because some * eigenvalues. * * ===================================================================== * * .. Parameters ..
public SGEESX()
public static void SGEESX(java.lang.String jobvs, java.lang.String sort, java.lang.Object select, java.lang.String sense, int n, float[][] a, intW sdim, float[] wr, float[] wi, float[][] vs, floatW rconde, floatW rcondv, float[] work, int lwork, int[] iwork, int liwork, boolean[] bwork, intW info) | http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/SGEESX.html | CC-MAIN-2017-51 | en | refinedweb |
C has a very weak form of data encapsulation that is provided via the generic void * pointer and the ability to declare that a struct is local to a file. Suppose I want to declare a Stack data type in C and I want to hide its implementation, including its data structures, from users. I can do this by first defining a public file called Stack.h that contains my generic Stack data type and the functions that the stack data type supports:
Stack.h: typedef void * Stack; Stack stack_new(int size); void stack_free(Stack s); void stack_push(Stack s, int value); int stack_pop(Stack s);Note that I have prefaced all my function names with the "stack_" prefix so that I can avoid name conflicts with user selected names. C++ and Java have ways to avoid these name conflicts and they will be discussed later.
Next I create my stack.c file that contains the implementation for my stack data type:
#include "stack.h" #include <stdlib.h> typedef struct { int size; int *data; int top; } myStack; Stack stack_new(int size) { myStack *newStack = (myStack *)malloc(sizeof(myStack)); newStack->size = size; newStack->data = (int *)malloc(sizeof(int) * size); newStack->top = 0; return (Stack)newStack; /* cast myStack to a (void *) */ } void stack_push(Stack s, int value) { myStack *stack = (myStack *)s; if (stack->top == stack->size) return; /* should really do error handling */ stack->data[stack->top] = value; stack->top++; } ...Since myStack is declared locally and is not declared extern in the stack.h file, its scope is limited to stack.c. Hence only the functions in stack.c can manipulate the myStack data structure. The user is handed a (void *) which effectively hides a stack's implementation because there is no way for the user to cast the (void *) to a myStack. Whenever the user wants to manipulate the stack the user passes a (void *) to the appropriate stack function. The stack function can cast this (void *) to a myStack struct and manipulate the stack in any way it wishes.
This form of data encapsulation using void *'s is fairly kludgy but it does allow several files to share their implementation, as long as each file declares its local data structures in exactly the same way. For example, I could spread the stack implementation over two files by declaring a myStack struct locally in both files. The obvious drawback to this approach is that instead of having one central declaration for the stack's data structures I have one declaration per file, which makes it much more difficult and error-prone to change the data structures.
A positive aspect of the void * implementation is that you can hand a binary implementation to a third party without divulging any proprietary implementation knowledge because the third party will only see the void * in the .h file. Hence the third party will not even know what data structures you are using.
The public, protected, and private accessors in C++ provide a way to control access to the implementation of a class. Unfortunately, these accessors are "all" or "nothing" accessors, they either let everyone access the implementation or only subclasses to access the implementation. They do not provide a way to say "let classes A, B, and C have access to each other's implementation, but exclude everyone else."
C++'s developers partially address this problem by providing the friend keyword. A class can declare that other classes are its friends, which allow the other classes to examine the protected and private instance variables of the class. For example:
class ListNode { friend class List; ... };This declaration gives any method in List the ability to examine any variable in ListNode and to call any method in ListNode, regardless of whether or not the access protection is public.
Friendship has a number of klunky disadvantages. First it is not two way. When you declare List to be ListNode's friend, ListNode does not become a friend of List. List must explicitly declare ListNode to be a friend before the friendship becomes two way. Second, subclasses do not inherit a superclass's friendship status. For example, suppose you have the following subclass declaration:
class DList : public List { ... }DList is not considered a friend of ListNode, despite the fact that it is a subclass of a friend of ListNode.
These restrictions are incredibly annoying and really limit the effectiveness of friends in C++. First, if you want classes A, B, and C to share their implementation, you must ensure that all the classes mutually refer to each other as friends. Second, if you want their subclasses to also be friends, which invariably you do, then you have to make sure that the subclasses mutually refer to each other as friends. In general, if you want n classes to be friends, you will need n(n-1) friend declarations. In addition, if you add a new class to the system that should be included amongst the friends, then you must remember to add 2n more friend declarations. What a mess!
The second module-related concept in C++ is that of namespace's. The namespace keyword allows a programmer to specify that a certain set of variables, functions, and classes belong to the same library or "module". For example, a programmer might write:
namespace ibm { class Stack { ... }; class List { ... }; class ListNode { ... }; class Consult { ... }; ... } namespace apple { class Stack { ... }; class List { ... }; class ListNode { ... }; class Cut { ... }; ... }Notice that the same set of names have been re-used, but since they are in two different namespaces, that is ok. There are three common ways to access members of a namespace:
ibm::Stack *s = new ibm::Stack();
using ibm::Stack; Stack *s = new Stack();
using namespace ibm; Stack *s = new Stack();
If you import conflicting names into the same namespace it is problematic only if you try to use that name:
using namespace ibm; using namespace apple; Consult *c = new Consult(); // ok--no name conflict Stack *s = new Stack(); // compiler error because of a name conflictNamespaces can span one or more files, so you can still place declarations in a .h file and definitions in a .cpp file. For example, to declare the methods for ibm's Stack class one could use any of the following three styles in an ibm.cpp file:
C++ implements its standard template library using the std namespace. This library provides a number of pre-defined data structures, such as vectors and lists.
Namespaces solve another of C's problems, which is that all variable, function, and class names end up in the same global name space. This common grouping can create problems when you combine third party software from two different vendors, who duplicate one or more names, as shown above.
Unfortunately C++'s developers did not create true modules with the namespace keyword. Unlike Java's packages, C++'s namespaces do not provide a way to share implementation among members of the namespace. If ListNode and List are declared in the same namespace, they still cannot access one another's members without using the friend keyword. It would have been nice if they also added the concept of package-level access so that one could truly create modules in C++, but they didn't. As a result Java has a much more powerful module mechanism than C++.
class ListNode { friend class List; ... };
class DList : public List { ... }then DList is not considered a friend of ListNode. This restriction is incredibly annoying and really limits the effectiveness of friends in C++ | http://web.eecs.utk.edu/~bvz/teaching/cs365Sp17/notes/modules.html | CC-MAIN-2017-51 | en | refinedweb |
Free for PREMIUM members
Submit
Learn how to a build a cloud-first strategyRegister Now.
<TD align="CENTER"><A href="Packing_Slip_Progres
<IMG src="Images/btn_packing_sl
</TR>
or
<TD align="CENTER"><A href="Packing_Slip_Progres
<IMG src="Images/btn_packing_sl
</TR>
I've both way and none are working. I click to button once and quickly click it again. The system log my click both time.
onclick="this.style.cursor
Modern healthcare requires a modern cloud. View this brief video to understand how the Concerto Cloud for Healthcare can help your organization.
<TD align="CENTER"><A href="Transactions_Main_Se
<IMG src="Images/btn_charge_dec
The idea is after you click the link once, an onclick handler is added that cancels any subsequent clicks. Hope that helps.
Thank you both!
is it new function( ) or just function( )
1) obj.onclick = function() {alert("hello world");};
2) obj.onclick = new Function("alert('hello')")
I prefer the first method because it looks a lot like functions you already know how to define, you just leave out the function's name (though you can also name the function if you really want to).
The second method actually creates a Function object which you can call. The constructor takes a comma delimited list of parameters followed by a string containing the function's body. In this case, we have no parameters, so we only need the function body as a string. But say you need a parameter, you'd do something like this:
increment = new Function("x", "return (x +1);");
You can now call increment as you would any other user-defined function; it takes a number and returns 1 plus that number.
I found this link a while ago, it might help explain what I'm talking about: As always, hope that helps. | https://www.experts-exchange.com/questions/21210052/How-to-disable-a-image-button.html | CC-MAIN-2017-51 | en | refinedweb |
On December 2011, there has been a release for both of them, 2.1.11, stable, and 3.1.0, beta. The 3.0.x development has been discontinued, for a number of reasons that you can read in the OMQ release report on github.
I am not using ZMQ for a production job at the moment, so I can do some experimentation with version 3, even if it is not marked as stable.
As usual, first thing to do is downloading the software and set the development environment up.
It didn't change much, as I have seen following my own installation notes I wrote for the 2.1.x version.
First step is going to the official download page on zeromq.org, and choose the version that suits you better.
I'm still working on Windows with MSVC 2010 so, once I downloaded the compressed package, I expanded it, and I opened the builds/msvc/msvc10.sln solution. And then was just a matter of building it to get a fresh copy of ØMQ lib and dll.
Then I created a new solution, where I ported the first very simple application that just say hello and show the current 0MQ version.
A few settings for the MSVC project:
In the VC++ Directories tab, I added the (...)\zeromq-3.1.0\include directory in the field Include Directories (as one would expected).
In the Linker - General tab, I added (...)\zeromq-3.1.0\lib\Win32 to the Additional Library Directories.
And in the Linker - Input tab, I added among the Additional dependencies the zmq library name (it varies if you are using a normal or debug version).
Remember to put the DLL in a path visible to the application at runtime.
Once all of this is done, we could write the code:
#include <iostream> #include <zmq.h> int main() { int major, minor, patch; zmq_version(&major, &minor, &patch); std::cout << "Hello from ZMQ " << major << '.' << minor << '.' << patch << std::endl; }Not a very creative use of ZeroMQ, but if it compiles, and if you get this output:
Hello from ZMQ 3.1.0You can assume everything works fine. | http://thisthread.blogspot.com/2012_01_01_archive.html | CC-MAIN-2017-51 | en | refinedweb |
Porting a NES emulator from Go to Nim2015-05-01 · Nim · Programming
Let me get this straight. We have an emulator for 1985 hardware that was written in a pretty new language (Go), ported to a language that isn’t even 1.0 (Nim), compiled to C, then compiled to JavaScript? And the damn thing actually works? That’s kind of amazing.
I spent the last weeks working on NimES, a NES emulator in the Nim programming language. As I really liked fogleman’s NES emulator in Go I ended up mostly porting it to Nim. The source code is so clean that it’s often easier to understand the internals of the NES by reading the source code than by reading documentation about it.
The choice of backend fell on SDL2 for me, contrary to GLFW + PortAudio that the Go version used. This was mainly motivated by the great portability promised by SDL2. Later we will see how porting to JavaScript and Android worked. If you’re impatient and want to play a game, there’s a JS demo.
Comparison of Go and Nim
Most Go concepts are quite trivial to translate to Nim. This made the porting process simple.
Let’s compare some data that I found interesting:
¹ Excluding go-glfw, go-gl and portaudio, which take 17 s to compile
² Emulation code only
It’s nice to see Nim doing well. Even the compile time is shorter than that of Go, which is well known for its short compile times. Now that the port seems to be doing fine and should be running on all Desktop platforms, let’s look into some other interesting things we can do with Nim:
JavaScript port via emscripten
Nim has a JavaScript backend, but I don’t trust it to be stable enough for this task yet. So I opted for emscripten instead, which can compile C code into JavaScript. Since Nim outputs C code, this sounds like a perfect fit. Luckily eeeee helped me with getting it started, since he had experience by porting my DDNet client to teewebs.net.
It turned out that emsdk is the easiest way to use emscripten:
$ ./emsdk update $ ./emsdk install latest $ ./emsdk activate latest $ source ./emsdk_env.sh
This may take a while, get a cup of tea. Afterwards we should have the
emconfigure,
emmake and
emcc commands available. We can build regular Nim programs and look at the resulting html file:
$ cat hello.nim echo "Hello World" $ nim --cc:clang --clang.exe:emcc --clang.linkerexe:emcc \ --cpu:i386 -d:release -o:hello.html c hello.nim $ ls -lha hello.{html,js} -rw-r--r-- 1 def users 101K Mai 1 19:02 hello.html -rw-r--r-- 1 def users 385K Mai 1 19:02 hello.js
That’s a pretty cumbersome building command, so we’ll slim it down later. The next step is to build SDL2 for emscripten:
$ hg clone $ cd SDL $ emconfigure ./configure --host=asmjs-unknown-emscripten \ --disable-assembly --disable-threads \ --enable-cpuinfo=false CFLAGS="-O2" $ emmake make $ ls -lha build/.libs/libSDL2.a -rw-r--r-- 1 def users 1.6M Apr 29 06:58 build/.libs/libSDL2.a
I put the resulting
libSDL2.a into the NimES repository under
emscripten/ for convenience.
Instead of increasing the cumbersomeness of our build command anymore, NimES’s nim.cfg specifies how to compile when
-d:emscripten is set:
@if emscripten: define = SDL_Static gc = none cc = clang clang.exe = "emcc" clang.linkerexe = "emcc" clang.options.linker = "" cpu = "i386" out = "nimes.html" warning[GcMem] = off passC = "-Wno-warn-absolute-paths -Iemscripten -s USE_SDL=2" passL = "-O3 -Lemscripten -s USE_SDL=2 --preload-file tetris.nes --preload-file pacman.nes --preload-file smb.nes --preload-file smb3.nes -s TOTAL_MEMORY=16777216" @end
Now a simple
nim -d:release -d:emscripten c src/nimes builds the JavaScript port. Note that I’m preloading a few ROMs so that they can be loaded. The HTML then uses the
?nes= parameter to pass the command line argument:
var argument; if (QueryString.hasOwnProperty("nes")) { argument = QueryString.nes; } else { argument = "smb3.nes"; } var Module; Module = { preRun: [], postRun: [], arguments: [argument], canvas: (function() { var canvas = document.getElementById('canvas'); canvas.addEventListener("webglcontextlost", function(e) { alert('WebGL context lost. You will need to reload the page.'); e.preventDefault(); }, false); return canvas; })(), totalDependencies: 0 };
Inside the Nim source code there are some interesting changes too. I quickly wrapped these functions as there is no emscripten wrapper for Nim yet:
when defined(emscripten): proc emscripten_set_main_loop(fun: proc() {.cdecl.}, fps, simulate_infinite_loop: cint) {.header: "<emscripten.h>".} proc emscripten_cancel_main_loop() {.header: "<emscripten.h>".}
Emscripten requires a slightly different execution style. Instead of actually looping, we define the main loop like this:
when defined(emscripten): emscripten_set_main_loop(loop, 0, 1) else: while runGame: loop()
That’s the main idea and with this we get a pretty playable web version of NimES. I’m still getting 60fps in it, but just barely on my machine. Chrome seems to do a bit better than Firefox.
Android port
Obviously the next step is to port NimES to Android as well. But since the original emulator is more accurate and nice than performant, we shouldn’t expect runnable speed. Think of this more as a proof of concept:
We need a fresh clone of the SDL2 repository for this as well as the Android SDK (12 or later) and NDK (7 or later) installed. SDL2 has building instructions for Android as well:
$ hg clone $ cd SDL/build-scripts $ ./androidbuild.sh org.nimes /dev/null $ ls ../build/org.nimes gen/ src/ build.properties local.properties jni/ AndroidManifest.xml build.xml proguard-project.txt res/ ant.properties default.properties project.properties
That’s our Android build directory now. I put this into the repository as well, under
android/. Now we can add some ROM to the
assets/ directory and tell Nim to put the resulting C files into the correct directory and not to build them into binaries at all:
@if android: cpu = "i386" nimcache = "./android/jni/src" compileOnly noMain @end
You may have noticed that I also defined
noMain. Instead we define our own main function, as SDL is a bit weird with mains. Thanks to yglukhov for this little trick:
when defined(android): {.emit: """ #include <SDL_main.h> extern int cmdCount; extern char** cmdLine; extern char** gEnv; N_CDECL(void, NimMain)(void); int main(int argc, char** args) { cmdLine = args; cmdCount = argc; gEnv = NULL; NimMain(); return nim_program_result; } """.}
Another trick is how to access the assets we embed into our APK. Luckily SDL2 provides functions for that, which we can use as replacements for the regular file operations:
from sdl2 import rwFromFile, read, freeRW proc newCartridge*(path: string): Cartridge = var file = rwFromFile(path.cstring, "r") defer: freeRW file var header: iNESHeader # Read directly into the header object if read(file, addr header, 1, sizeof header) != sizeof header: raise newException(ValueError, "header can't be read") ...
Finally we can build the project:
$ nim -d:release -d:android c src/nimes $ cd android $ ndk-build $ ant debug
And the end result is a nice nimes.apk. Of course it only shows some low FPS video for now and doesn’t even have any controls, but it’s a start.
Conclusion
In the end I’m quite happy with the result: A truly portable emulator written in my favorite language. It compiles to C, C++ as well as JavaScript and runs on any Desktop platform as well as JavaScript and Android. The process for this was much easier than expected, mostly thanks to Nim and SDL2. I see a bright future for Nim as a practical language.
If you have any comments, suggestions or questions, feel free to ask them on Hacker News or Reddit. | https://hookrace.net/blog/porting-nes-go-nim/ | CC-MAIN-2017-51 | en | refinedweb |
?
> That's a known problem together with the Subversion bindings,
But which isn't as far I have been able to work out because mod_python
has bugs as some still occasionally suggest, but because of how the
Subversion bindings are implemented. :-(
> but seeing
> now that weird error reported for cx_Oracle and your suggestion, I
> wonder if some other, equally weird error involving mod_python and
> mysql-python couldn't be explained by the same cause (see
>).
Looking at MySQLdb module, the error in that ticket relating to
'connect' not being found within the module looks to me totally
unrelated to issues as to what sub interpreter an application may be
running under within mod_python. This is because MySQLdb is a Python
module and 'connect' is just an attribute of that and should be
unchanging. The actual C code component of MySQLdb is at a lower level
and shouldn't really come into it.
What I would be focusing more on with that ticket is the code in
trac/db/mysql_backend.py at:
def __init__(self, path, user=None, password=None, host=None,
port=None, params={}):
import MySQLdb
....
if (self._mysqldb_gt_or_eq((1, 2, 1))):
cnx = MySQLdb.connect(db=path, user=user, passwd=password,
host=host, port=port, charset='utf8')
else:
cnx = MySQLdb.connect(db=path, user=user, passwd=password,
host=host, port=port, use_unicode=True)
Given that the import of MySQLdb is done within the function scope, I
presume it is in some way evaluated each time the function is called
and reference obtained via import mechanism. That the error comes up
sometimes suggests that the import mechanism is throwing back an empty
module reference, or the wrong module altogether and thus why
'connect' cannot be found..
In short, applying some basic debugging techniques to the problem in
that ticket should help to track it down if the original reporter was
willing to look into it properly. Unfortunately most people just want
a quick fix, preferably from some one else. ;-)
Graham | http://modpython.org/pipermail/mod_python/2007-March/023344.html | CC-MAIN-2017-51 | en | refinedweb |
Adding a Quandl Data Feed
Just started using backtrader today and ran into trouble trying to plot a quandl data feed. I keep getting this error "ValueError: posx and posy should be finite values"
The data is just a Pandas Dataframe which only contains 2 columns, Date and Values. I.E;
2009-01-31 0.000000 2009-02-01 1100.000000 2017-02-01 227771.619795 ... ...
So far this section in the docs seems like the likely solution for this but before I dive into this, I would like to know ahead of time if there are any caveats I should know about when working with this kind of data. I just want to add some MA's to it as part of a strategy. PyAlgoTrade seems to already include support for quandl data, fyi.
A brief code sample to understand how you actually load the
Dataframeas a data feed would be key to understand where
posxand
posyplay a role (for sure not inside backtrader ... probably inside
pandas)
The best way to load such a particular feed would be to follow this: Community - How to Feed Backtrader Alternative Data
The summary:
- A
datetimefield is always needed
- You can put your other column into any other field like
open,
high,
low,
volumenor
openinterest, which are already predefined, by passing the appropriate parameter during the creation of the data feed (if the name of the column matches any of those, it will be autodetected, else do something like
PandasData(dataname=df, open=1), where
1indicates that the column with index
1contains data to put into open
As a plus you could even override the lines hierarchy with
linesoverride. See this community post Community - Execute on bid/ask and/or this Blog - Escape from OHLC Land A
datetimefield is always needed.
@backtrader First I grab the dataframe using quandl.get(), then If I set the data's 'value' column as any of the params, i.e; volume. I get a pandas error:
File "pandas/index.pyx", line 65, in pandas.index.get_value_at (pandas/index.c:2759)
File "pandas/src/util.pxd", line 69, in util.get_value_at (pandas/index.c:16931)
IndexError: index out of bounds:
def strategy(datafeed): cerebro = bt.Cerebro() data = bt.feeds.PandasData(dataname=datafeed, datetime=None, high=None, low=None, open=None, close=None, volume=1, openinterest=None) cerebro.adddata(data) cerebro.run() cerebro.plot()
If I try to import the same data as CSV format the following code produces this error in matplotlib:
"ValueError: posx and posy should be finite values"
data = bt.feeds.GenericCSVData(dataname=datafeed, datetime=0, dtformat='%Y-%m-%d', tmformat=-1, high=-1, low=-1, open=-1, close=-1, volume=1, openinterest=-1)
@shawndaniel said in Adding a Quandl Data Feed:
datetime=None,
...
volume=1
If
datetimeis
Noneis because the dataframe has the timestamps in the index. If you only have
1column (seems so according to your data) it cannot for sure be
1
How your csv data looks is unknown. How you load it may be right or wrong.
- shawndaniel
@backtrader The data is shown in the original post.
The original post contains an excerpt of the printout of a
pandas.Dataframe, not the csv format.
But in any case the problem with
matplotlibis due to having only the presence of
volumeand the absence of any other component (
open,
high,
low). The volume is usually meant to be plotted as an overlay and if the other components aren't there, there is actually no axis (because there are no
xvalues)
The best and quick option is to put the value in the
closefield and let if be plotted as a line on close (which is the default)
If a plotting format like
bars(like in volume) were needed, some extra work would be needed without developing something (untested approach)
- Load the data
- Disable plotting of the data with
data.plotinfo.plot = False
- Create a Simple Moving Average on it with
period=1as in
ma = bt.ind.SMA(data, period=1)
- Tell the moving average to plot on its own:
ma.plotinfo.subplot = True
- The the line of the moving average to plot as bars:
ma.plotlines.sma = dict(_method='bar', width=1.0)(from
MACD)
@backtrader Thank you :) using it as the close instead of volume worked with GenericCSV method. I already tried setting the value as any of the params for the pandas method before but it didn't work (IndexError: index out of bounds) so it didn't cross my mind as a solution for the other method..Thanks again.
- Taewoo Kim
There's also the pandas method that should solve it.
FYI I couldn'get GenericCSV method to work either. So I had to feed data through pandas and use the pandas method
There is a native Quandl data feed (for the WIKI Data) starting with
1.9.48.116
- Maxim Korobov
A little patch:
if self.p.apikey is not None: urlargs.append('api_key=%s' % self.p.apikey)
There was a typo previously. | https://community.backtrader.com/topic/225/adding-a-quandl-data-feed | CC-MAIN-2017-51 | en | refinedweb |
class Date {
private int year;
private String month;
private int day;
public Date() {
month = "January";
year = 1999;
day = 1;
} //End of Constructor 1
public Date(int year, String month, int day) {
setDate(year, month, day);
} //End of Constructor 2
public Date(int year) {
setDate(year, "January", 1);
} //End of Constructor 3
public void setDate(int year, String month, int day) {
this.year = year;
this.month = month;
this.day = day;
} //End of Constructor 4
}
public class Calendar {
public static void main(String[] args){
Date date1 = new Date(2009, "March", 3);
Date date2 = new Date(2010);
Date date3 = new Date();
}
}
When you try to print object, its
toString() method is called which is inherited by all
java class from
Object class (superclass of all java class by default). So you will have to override
toString() method in your class
if you need some specific contents of the object to be printed. By default, this method prints
Class and its hash code. Since you have not overriden the
toString(), the printed string
contains object class and its hash code ( u.Date@15....).
Your constructor calls are determined by the argument you pass to the constructor.
Like in
date1, you passed 3 parameter of type
int,string and int in order.
This matches your constructor 2 arguments which are of
int, string and int.
So in in your date1 object consturction, constructor 2 is called.
similarly for date2, constructor 3 is called
and for date3, default constructor i.e consturcot 1 is called.
The "constructor 4" you marked is not a constructor, it is simply a method. Constructor do not have return type.
Again , to print as you expected in your question, override
toString() method in your class and format the result accordingly in that method to get expected result. | https://codedump.io/share/i0BLXmqwjKwH/1/java-beginner-confused-about-usage-of-constructors | CC-MAIN-2017-51 | en | refinedweb |
I'm trying to understand how the cycle of my "main.py" works. It's based on examples found on the net, about the PySide and Qt Designer, to implement a Python GUI.
The code is:
#***********************************#
# Python Libraries #
#***********************************#
from PySide.QtCore import *
from PySide.QtGui import *
import sys
import time
#***********************************#
# Python files #
#***********************************#
import Gui
from server import *
class MainDialog(QDialog, Gui.Ui_TCPServer):
def __init__(self, parent=None):
super(MainDialog, self).__init__(parent)
self.setupUi(self)
self.connect(self.ConnectBt, SIGNAL("clicked()"), self.ConnectBt_clicked)
self.connect(self.QuitBt, SIGNAL("clicked()"), self.QuitBt_clicked)
self.connect(self.DisconnectBt, SIGNAL("clicked()"), self.DisconnectBt_clicked)
print "NOW HERE\r\n"
def ConnectBt_clicked(self):
self.ConnectBt.setText("Connecting...")
self.server_connect()
print "THEN HERE\r\n"
def QuitBt_clicked(self):
self.close()
def DisconnectBt_clicked(self):
self.ConnectBt.setText("Connect")
self.server_off = ChronoRequestHandler()
self.server_off.finish()
def server_connect(self):
self.server_on = ServerStart()
self.server_on.try_connect()
if __name__ == '__main__':
app = QApplication(sys.argv)
form = MainDialog()
print "HERE\r\n"
form.show()
app.exec_()
print "END\r\n"
I think you should get more clear on how object programming and events work.
In the last if-statement (the code on the bottom that runs when you call your script from e.g. terminal) you create an app object instance of QApplication.
After that you create form, instance of MainDialog which is the class you define above (inheriting methods, properties, etc from two classes, QDialog, Gui.Ui_TCPServer).
By doing
form = MainDialog()
you run __init__, print "NOW HERE" and go out of that method. Please check what __init__ does in Python. why-do-we-use-init-in-python-classes
Before the end you call the exec() method of the app instance. This contains a loop so that your interface gathers and processes events. See the documentation on QApplication.exec() below.
When you press the 'ConnectBt' button you call the ConnectBt_clicked() method, which does stuff (connects with the server) and prints "THEN HERE".
In the same way, when you press QuitBt you call QuitBt_clicked(), which closes the connection and lets the code print "END".
I also suggest you read more documentation about the classes you are using. They will explain how come that the different buttons are "linked"/have as callbacks the methods ConnectBt_clicked(), def QuitBt_clicked(), and DisconnectBt_clicked(). The mechanisms by which the buttons trigger these callbacks is kind of implicit in the code implemented in those classes.
QApplication Class Reference: exec_, quit(), exit(), processEvents(), and QCoreApplication.exec(). | https://codedump.io/share/jS2XRAGwQraz/1/how-does-the-maindialog-cycle-work | CC-MAIN-2017-51 | en | refinedweb |
I'm looking to take the default ID key from the django model turn it into hexadecimal and display it on a page when the user sumbits the post, I've tried several of methods with no success can anyone point me in the right direction?
views.py
def post_new(request):
if request.method == "POST":
form = PostForm(request.POST)
if form.is_valid():
post = form.save(commit=False)
post.published_date = timezone.now()
post.save()
return redirect('post_detail', pk=post.pk)
else:
form = PostForm()
return render(request, 'books_log/post_edit.html', {'form': form})
Python's hex function is all you need here, but the problem is you can't call it directly from your template. So the solution is to add a method to your model.
class MyModel(models.Model): def to_hex(self): return hex(self.pk)
Then in your template
{{ my_object.to_hex() }} | https://codedump.io/share/kvcJ6Dxwqj8q/1/python-django-id-to-hex | CC-MAIN-2017-51 | en | refinedweb |
I read the article about singleton objects in scala but didn't find abything about if it's an instance of the class.
The following simple program tells that for this particular case it's true:
class TestMatch(val i: Int)
object TestMatch{
def apply(i: Int) = new TestMatch(i)
def unapply(tm : TestMatch): Option[Int] = Some(tm.i)
}
println(TestMatch.isInstanceOf[TestMatch]) //false
fruitless type test
objects are always an instance of an anonymous class whose body is the
object definition's body. If the definition does not include an
extends clause, that class inherits directly from
AnyRef.
So the only way the
TestMatch object could be an instance of the
TestMatch class would be if you wrote
object TestMatch extends TestMatch(something) {...}. Since you didn't do that, the
TestMatch is not an instance of
TestMatch in your code.
PS: The reason you're getting that warning for your test code is that Scala already knows at compile time exactly what the type of
TestMatch is, so it knows that the run-time test can only ever result in
false and is thus "fruitless". Generally you'd use run-time type tests when the exact type of something is not determined until run time and the test may thus may be true or false. | https://codedump.io/share/vzXlI5u9dgVR/1/is-companion-always-instance-of-the-class | CC-MAIN-2017-51 | en | refinedweb |
- Expand the test project node, in the Solution Explorer window
- Expand the References child node
- Right-click the assembly you wish to mole -- the context menu appears
- In the context menu, select the Add Moles Assembly option -- a .moles file appears in the text project
- Build the test project
Before building the test project, you should see a new file appear in the test project, named (TargetAssemblyName).moles. This file tells the compiler that in needs to create a mole assembly for the specified target assembly. After the build, you will see the mole assembly appear in the references node. This will be named after the target assembly, but affixed with ".moles".
It is important to remember that you must add the appropriate using statements and assembly attribute, to access the Moles framework from your unit tests:
using Microsoft.Moles.Framework; using Microsoft.VisualStudio.TestTools.UnitTesting; using MolesDemos; using MolesDemos.Moles; [assembly: MoledType(typeof(IMolesDemoClass))]
Also, test methods using Moles must be decorated with the HostType attribute:
[TestMethod] [HostType("Moles")] [ExpectedException(typeof(FileNotFoundException))] public void MyMoleTestMethod() { ... }
NOTE: After adding objects to the target code, it may be necessary to perform a rebuild on the test project. It things really get stuck:
- Clean both the target and test projects
- Delete the hidden "Moled Assemblies" folder and contents
- Rebuilt the target project
- Rebuild the test project
How do we mole the system assembly
To mole the System assembly (.NET Framework):
1. Right-click the References node of your test project. In the context menu, select "Add Moles Assembly for mscorlib". This allows you to detour the .NET Framework namespaces.
2. Add the using declaration for the mole assembly you wish to use. For example, to detour System.StringComparer, add declaration:
using System.Moles;
3. Compile the project.
4. Detour the method, in a moled test method. This detour always returns 0 (equal):
System.Moles.MStringComparer.AllInstances.CompareObjectObject = (instance, lhs, rhs) => 0;
I added a new post, to answer your question:
How to Mole the System Assembly | http://thecurlybrace.blogspot.com/2011/10/how-do-i-mole-assembly.html | CC-MAIN-2017-51 | en | refinedweb |
CDI - Injecting Classes at runtimehamsterdancer Mar 9, 2012 1:55 PM
Hi there
I'm working on a project, where it is needed to load some classes at runtime. The classes to load are parts of CDI-Containers and have to be able to inject some stuff. The "loading class" itself is a part of a CDI-Container as well.
Now comes my problem. It is possible to load and instantiate any class via reflection, but in this case it would not be possible for the classes to be loaded to get anything injected. So it is needed to get an instance of these classes as it would be internally done by the server like when we would use the annotation @javax.inject.Inject.
Is there any way to load the classes of another CDI-container in a way that they can still work with Injections (otherwise it would not make any sense^^)? Maybe there is any kind of Class which is responsible for for handling all of these classes so that I can simply tell it the name of the class to load (as I would do it with reflections)... ?
Thanks
1. Re: CDI - Injecting Classes at runtimeJason Porter Mar 9, 2012 2:09 PM (in response to hamsterdancer)
You're looking for code like what's in
2. Re: CDI - Injecting Classes at runtimeFeng Jiang Mar 12, 2012 11:38 AM (in response to hamsterdancer)
// Get BeanManager By JNDI
public static BeanManager getBeanManager() {
try {
return (BeanManager) new InitialContext()
.lookup("java:comp/BeanManager");
} catch (NamingException e) {
log.error("Can not get BeanManager!");
e.printStackTrace();
}
return null;
}
// Get the cls's instance
public static <T> T getBeanReference(Class<T> cls) {
Bean<?> myBean = getBeanManager().getBeans(cls).iterator().next();
return (T) getBeanManager().getReference(myBean, cls,
getBeanManager().createCreationalContext(myBean));
}
// Get the instance which with Annotation a
public static <T> T getBeanReference(Class<T> cls, final Class a) {
@SuppressWarnings("serial")
Bean<?> myBean = getBeanManager()
.getBeans(cls, new AnnotationLiteral() {
@Override
public Class annotationType() {
return a;
}
}).iterator().next();
return (T) getBeanManager().getReference(myBean, cls,
getBeanManager().createCreationalContext(myBean));
}
I hope above three method will help you | https://community.jboss.org/message/722915 | CC-MAIN-2015-32 | en | refinedweb |
embedly_cards 0.2.0
Pelican plugin for embedding external content using Embed.ly CardsEmbedly-Cards
===============
Embedly-cards is a Pelican_ plugin providing restucturedText directives to allow
easy embedding of external content using `Embed.ly Cards <http: embed.`_.
`ReST <http: iza.`_ and
`markdown <http: iza.`_
live examples can also be viewed from a Pelican-built website.
.. _Pelican:
Features
============
Embed content within a page or blog post easily, simply by specifying the URL of
the target page. Content is automatically recognised, extracted, and formatted as
a 'card'; this may contain a short article preview, embedded video, picture etc.
To preview a card, they can be generated online using `Embed.ly <http: embed.`_.
Almost any site is compatible, including YouTube, Flickr, Google+, Maps, Wordpress etc.
Installation
============
Embedly-cards can be installed using `pip`
.. code-block:: bash
$ pip install embedly-cards
or manually from the source code
.. code-block:: bash
$ python setup.py install
Once installed, simply add it to your ``pelicanconf.py`` configuration file:
.. code-block:: python
PLUGINS = [
# ...
'embedly_cards'
]
If you are planning on embedding content in markdown ``.md`` files,
you must also add it to the ``MD_EXTENSIONS`` options, like so:
.. code-block:: python
from embedly_cards import EmbedlyCardExtension
MD_EXTENSIONS = ['codehilite(css_class=highlight)',
'extra',
# ...
EmbedlyCardExtension()]
.. important::
If creating the ``MD_EXTENSIONS`` variable for the first time,
ensure that the Pelican ``'codehilite(css_class=highlight)'``
and ``'extra'`` markdown extensions are included in the list.
Usage
============
For example, to embed a YouTube video in ReStructuredText:
.. code-block:: ReST
.. embedly-card::
or in markdown:
.. code-block:: md
[!embedlycard]
Or to embed an article/webpage in ReStructuredText:
.. code-block:: ReST
.. embedly-card::
or in markdown:
.. code-block:: md
[!embedlycard]
Options
========
The ``card-chrome`` (ReST) or ``chrome`` (markdown) option, if provided, specifies
whether or not to preserve the border around the card. By default, the border
will be removed automatically *if Embed.ly supports it*; however to force the
border to remain, you may pass ``:card-chrome: 1`` (ReST),
.. code-block:: ReST
.. embedly-card::
:card-chrome: 1
or ``chrome=1`` (markdown):
.. code-block:: md
[!embedlycard?chrome=1]
- Downloads (All Versions):
- 10 downloads in the last day
- 84 downloads in the last week
- 263 downloads in the last month
- Author: Josh Izaac
- Download URL:
- License: GPLv3
- Provides embedly_cards
-: embedly_cards-0.2.0.xml | https://pypi.python.org/pypi/embedly_cards | CC-MAIN-2015-32 | en | refinedweb |
HOWTO Setup Android Development
From FedoraProject
Install one of these URLs, depending on your version of Eclipse. For Eclipse version 3.5 use:
or for Eclipse version 3.6 use:
For Eclipse version 3.7 (Fedora 16 and current Rawhide (as of Oct. 10, 2011)), use:
For Eclipse version 4.2 (Fedora 17 and current Rawhide (as of Jun. 6, 2012)), use:
If you're unsure which version of Eclipse you are using, check it at Help > About Eclipse..
- Back in the Available Software view,-tools export PATH
- Logout and login back to apply path change
Android Emulator
32 bit packages
#
AVD device
- cd into the ~/AndroidSDK directory and run tools/android to configure and create your first Android Virtual Device.
- Go to "Available Packages", select components for just those versions of Android you want to work with. For example:
- SDK Platform Android 2.1
- Documentation for Android SDK
- (SDK version r_08) For the adb tool, make sure you also select:
- Platform Tools
- Click on "Install selected", then click on "accept all" and confirm with clicking on "Install". This will start component installation, when it will be done, click on close. When this will be done, we could proceed with creation of AVD device itself.
- Go to "Virtual Devices", Click on "New", this will open screen where you need to specify SD card size (I will use 62MiB), name of device (I will use "android_dev1", target (Android 2.1, if you want to develop for different target, you need to go to step 2 and install SDK platform for different version).
- Now click on "Create AVD" which will create Android Virtual Device.
Running Emulator
Now we have created Android Virtual Device and we should start it, however, due to issues in AndroidSDK with sound, we will need to run it from command line
./emulator -noaudio -avd android_dev1
And this will start emulator for us.
Hello Fedora
Configure Android in Eclipse
- Go to Window -> Preferences, click on Android and set SDK location to directory. (for example /home/user/AndroidSDK) and click on Apply.
- Click on apply to reload available targets
- choose target android SDK
- click on OK
Create a New Android Project
After you've created an AVD, the next step is to start a new Android project in Eclipse.
- From Eclipse, select File > New > Project. If the ADT Plugin for Eclipse has been successfully installed, the resulting dialog should have a folder labeled "Android" which should contain "Android Project". (After you create one or more Android projects, an entry for "Android XML File" will also be available.)
- Select "Android Project" and click Next.
- On next screen type Project Name ("HelloFedora"), Application name (Hello, Fedora), package name (com.example.hellofedora) which represent your namespace and name of activity in "Create Activity" box (HelloFedora). Choose target (if you have multiple targets) and click on "Finish". This will create project for you.
Development and Execution
- open HelloFedora.java and paste there example code from Hello Fedora Code section.
- click on windows -> preferences. In new window, open Android -> Launch and into "Options" text box insert "-noaudio"
- open separate console, cd ~/AndroidSDK/tools and execute ./emulator -noaudio @android_dev1 to start emulator. Wait for start of emulator (it could take several minutes)
- in eclipse, click on "run" and it will deploy application into Android Virtual Device.
Hello Fedora Code
package com.example.hellofedora; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; public class HelloFedora extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); TextView tv = new TextView(this); tv.setText("Hello, Android Developer\n Thank you, for using Fedora Linux"); setContentView(tv); } }
yum install gcc gcc-c++ gperf flex bison glibc-devel.{x86_64,i686} zlib-devel.{x86_64,i686} ncurses-devel.i686 libsx-devel readline-devel.i686 perl-Switch
-) | https://fedoraproject.org/w/index.php?title=HOWTO_Setup_Android_Development&oldid=316147 | CC-MAIN-2015-32 | en | refinedweb |
SafeHandle class in the System.Runtime.InteropServices namespace is an abstract wrapper class for operating system handles. Deriving from this class is difficult. Instead, use the derived classes in the Microsoft.Win32.SafeHandles namespace that provide safe handles for the following:
Files and pipes.
Memory views.
Cryptography constructs.
Registry keys.
Wait handles..
Imports System Imports System.IO Class Program Public Shared Sub Main() Try ' Initialize a Stream resource to pass ' to the DisposableResource class. Console.Write("Enter filename and its path: ") Dim fileSpec As String = Console.ReadLine Dim fs As FileStream = File.OpenRead(fileSpec) Dim TestObj As DisposableResource = New DisposableResource(fs) ' Use the resource. TestObj.DoSomethingWithResource() ' Dispose theresource. TestObj.Dispose() Catch e As FileNotFoundException Console.WriteLine(e.Message) End Try End Sub End Class ' Implements IDisposable Private _resource As Stream Private _disposed As Boolean ' The stream passed to the constructor ' must be readable and not null. Public Sub New(ByVal stream As Stream) MyBase.New() If (stream Is Nothing) Then Throw New ArgumentNullException("Stream is null.") End If If Not stream.CanRead Then Throw New ArgumentException("Stream must be readable.") End If _resource = stream Dim objTypeName As String = _resource.GetType.ToString _disposed = False End Sub ' Demonstrates using the resource. ' It must not be already disposed. Public Sub DoSomethingWithResource() If _disposed Then Throw New ObjectDisposedException("Resource was disposed.") End If ' Show the number of bytes. Dim numBytes As Integer = CType(_resource.Length, Integer) Console.WriteLine("Number of bytes: {0}", numBytes.ToString) End Sub Public Overloads Sub Dispose() Implements IDisposable.Dispose Dispose(True) ' Use SupressFinalize in case a subclass ' of this type implements a finalizer. GC.SuppressFinalize(Me) End Sub Protected Overridable Overloads Sub Dispose(ByVal disposing As Boolean) If Not _disposed Then ' If you need thread safety, use a lock around these ' operations, as well as in your methods that use the resource. If disposing Then If (Not (_resource) Is Nothing) Then _resource.Dispose() End If Console.WriteLine("Object disposed.") End If ' Indicates that the instance has been disposed. _resource = Nothing _disposed = True End If End Sub End Class | https://msdn.microsoft.com/en-us/library/fs2xkftw(d=printer,v=vs.100).aspx?cs-save-lang=1&cs-lang=vb | CC-MAIN-2015-32 | en | refinedweb |
Debugging
From HaskellWiki.
You must keep in mind that due to lazy evaluation your traces will only print if the value they wrap is ever demanded.
A more powerful alternative for this approach is Hood. Even if it hasn't been updated in some time, Hood works perfectly with the current ghc distribution. Even more, Hugs has it already integrated, see the manual page. Add an import Observe and start inserting observations in your code. For instance:
import Hugs.Observe
3 Dynamic breakpoints in GHCi
Finally, the GHC/GHCiDebugger project aims to bring dynamic breakpoints and intermediate values observation to GHCi in a near future. Right now the tool is only available from the site as a modified version of GHC, so unfortunately you will have to compile it yourself if you want to have it.
This tool allows to set breakpoints in your code, directly from the GHCi command prompt. An example session:
*main:Main> :break add Main 2 Breakpoint set at (2,15) *main:Main> qsort [10,9..1] Local bindings in scope: x :: a, xs :: [a], left :: [a], right :: [a] qsort2.hs:2:15-46> :sprint x x = _ qsort2.hs:2:15-46> x This is an untyped, unevaluated computation. You can use seq to force its evaluation and then :print to recover its type qsort2.hs:2:15-46> seq x () () qsort2.hs:2:15-46> :p x x - 10
Once a breakpoint is hit, you can explore the bindings in scope, as well as to evaluate any haskell expression, as you would do in a normal GHCi prompt. The ':print' command can be very useful to explore the lazyness of your code.
4 Catching Assert trick
5 Other tricks
- If you use GHC, you can get a stack trace in the console when your program fails with an error condition. See the manual page | https://wiki.haskell.org/index.php?title=Debugging&oldid=5856 | CC-MAIN-2015-32 | en | refinedweb |
{-# OPTIONS_GHC -XNoImplicitPrelude #-} ----------------------------------------------------------------------------- -- | -- Module : Unsafe.Coerce -- Copyright : Malcolm Wallace 2006 -- License : BSD-style (see the LICENSE file in the distribution) -- -- Maintainer : libraries@haskell.org -- Stability : experimental -- Portability : portable -- --' is just a -- trivial wrapper). -- -- * In nhc98, the only representation-safe coercions are between Enum -- types with the same range (e.g. Int, Int32, Char, Word32), -- or between a newtype and the type that it wraps. module Unsafe.Coerce (unsafeCoerce) where #if defined(__GLASGOW_HASKELL__) import GHC.Prim (unsafeCoerce#) unsafeCoerce :: a -> b unsafeCoerce = unsafeCoerce# #endif #if defined(__NHC__) import NonStdUnsafeCoerce (unsafeCoerce) #endif #if defined(__HUGS__) import Hugs.IOExts (unsafeCoerce) #endif | https://downloads.haskell.org/~ghc/6.12.1/docs/html/libraries/base-4.2.0.0/src/Unsafe-Coerce.html | CC-MAIN-2015-32 | en | refinedweb |
Following is a Java version of the
train example.
This is a multi threaded version of the
train example. In this
mode several threads can access the SICStus runtime via a
Prolog
interface. The static method
Jasper.newProlog() returns an
object that implements a
Prolog interface. A thread can make
queries by calling the query-methods of the Prolog object.
The calls will be sent to a separate server thread that does the actual
call to SICStus runtime.
// MultiSimple.java
import se.sics.jasper.Jasper; import se.sics.jasper.Query; import se.sics.jasper.Prolog; import java.util.HashMap; public class MultiSimple { class Client extends Thread { Prolog jp; String qs; Client(Prolog p,String queryString) { jp = p; qs = queryString; } public void run() { HashMap WayMap = new HashMap(); try { synchronized(jp) { Query query = jp.openPrologQuery(qs, WayMap); try { while (query.nextSolution()) { System.out.println(WayMap); } } finally { query.close(); } } } catch ( Exception e ) { e.printStackTrace(); } } }
{ try { Prolog jp = Jasper.newProlog(argv,null,"train.sav"); Client c1 = new Client(jp,"connected('Örebro', 'Hallsberg', Way1, Way1)."); c1.start(); // The prolog variable names are different from above // so we can tell which query gives what solution. Client c2 = new Client(jp,"connected('Stockholm', 'Hallsberg', Way2, Way2)."); c2.start(); } catch ( Exception e ) { e.printStackTrace(); } } public static void main(String argv[]) { new MultiSimple(argv); } }
Prologobject
jpis the interface to SICStus. It implements the methods of
interface Prolog, making it possible to write quite similar code for single threaded and multi threaded usage of Jasper. The static method
Jasper.newProlog()returns such an object.
Jasper.newPrologis the .sav file to restore. Two threads are then started, which will make different queries with the
connectedpredicate.
openPrologQueryis not recommended in multi threaded mode, but if you must use it from more than one Java thread, you should enclose the call to
openPrologQueryand the closing of the query in a single synchronized block, synchronizing on the Prolog object. See SPTerm and Memory for details on one of the reasons why this is necessary. | https://sicstus.sics.se/sicstus/docs/latest/html/sicstus.html/Multi-Threaded-Example.html | CC-MAIN-2015-32 | en | refinedweb |
Toy compression implementations
From HaskellWiki
(Difference between revisions)
Revision as of 16:46, 15 February 2007)
module Compression where import Data.List -- Run-length encoding encode_RLE :: (Eq x) => [x] -> [(Int,x)] encode_RLE = map (\xs -> (length xs, head xs)) . groupBy (==) decode_RLE :: [(Int,x)] -> [x] decode_RLE = concatMap (uncurry replicate) -- Limpel-Ziv-Welsh compression (Recommend using [Word8] or [SmallAlpha] for input!) encode_LZW :: (Eq x, Enum x, Bounded x) => [x] -> [Int] encode_LZW [] = [] encode_LZW (x:xs) = work init [x] xs where init = map (\x -> [x]) $ enumFromTo minBound maxBound -- TODO: Matching decode_LZW function. -- TODO: Huffman encoding. -- TODO: Arithmetic coding.
It may also be useful to add the following for test purposes:
Anybody know how to use
import Data.Word data SmallAlpha = AA | BB | CC | DD deriving (Show, Eq, Ord, Enum, Bounded) parse1 'a' = AA parse1 'b' = BB parse1 'c' = CC parse1 _ = DD -- For safety parse = map parse1
to make a type like
newtype
but with
Char
and
minBound
much closer together?
maxBound | https://wiki.haskell.org/index.php?title=Toy_compression_implementations&diff=11324&oldid=11323 | CC-MAIN-2015-32 | en | refinedweb |
LayoutRect
Creates a container element for document content in a print or print preview template.
Remarks
LAYOUTRECT elements define the area or areas (and their styles) on a page where a document's content is displayed when printed or during print preview. In a print template, LAYOUTRECT elements are contained by DEVICERECT elements, which define the printable area of the print template. A DEVICERECT can contain more than one LAYOUTRECT.
A LAYOUTRECT element is intended for use when building a print template. While this element renders on a Web page, most of its functionality is disabled when it is used outside a print template.
A print template typically has a series of connected LAYOUTRECTs into which a source document can flow as it is rendered for printing or previewing. The first LAYOUTRECT in the series defines the source of the content by specifying a contentSrc property or attribute. ContentSrc can be set to the string "document" to indicate that the current document displayed should be used as the source, or it can be set to a URL specifying another source. The LAYOUTRECT element also has a nextRect attribute or property to specify another LAYOUTRECT into which the source content should continue to flow once the current LAYOUTRECT is full. Each LAYOUTRECT in the series, except the last, defines a nextRect pointing to the next LAYOUTRECT in the series.
A print template usually handles documents of various lengths. It must provide enough LAYOUTRECT elements to accommodate an arbitrary amount of content. To accomplish this, use script to create LAYOUTRECTs dynamically, as a source document loads, by specifying an onlayoutcomplete event handler for each LAYOUTRECT. The event handler should check the event's contentOverflow property. When the contentOverflow property is true, the handler can create a new LAYOUTRECT with an id, an onlayoutcomplete handler, and a nextRect attribute pointing to the next LAYOUTRECT element in the series. The onlayoutcomplete event can fire more than once on a single LAYOUTRECT. For this reason, the event handler must cancel itself once it has been called, to prevent the handler from firing more than once.
A LAYOUTRECT element must have a style that defines a width and height; the default value for each of these properties is zero. A print template can obtain the current page setup information by querying the TemplatePrinter behavior properties, including marginBottom, marginLeft, marginRight, marginTop, pageWidth and pageHeight. A LAYOUTRECT style cannot redefine styles within the content source; for instance, it cannot redefine font-family or font-weight.
When using the LAYOUTRECT element, you must prefix it with an XML namespace. Declare the namespace using the IMPORT processing instruction. For example, the namespace "IE" can be declared by using the following statement:
The LAYOUTRECT element syntax to use with this namespace is <IE:LAYOUTRECT ... />.
The LAYOUTRECT element is an unscoped element; that is, it does not have a closing tag. It must have a forward slash (/) before its closing bracket. example shows some of the basic elements of a print preview template, including the onlayoutcomplete event handler. Note that the LAYOUTRECT elements in this example define their nextRect attributes before the next LAYOUTRECT is added to the series. This template supports print preview, but not printing. For information on printing support, see the TEMPLATEPRINTER reference page.
<HTML XMLNS:IE> <HEAD> <?import namespace="ie" implementation="#default"> <STYLE type="text/css"> .layoutstyle { margin:1in; width:4in; height:4in; } .pagestyle { border:1 solid black; width:8.5in; height:11in; margin:5px; } </STYLE> <SCRIPT language="JScript"> index = 1; function OnRectComplete() { if (event.contentOverflow == true) { document.all("LRect" + index).onlayoutcomplete = null;"; newHTML += "</IE:DEVICERECT>"; pagecontainer.insertAdjacentHTML("beforeEnd", newHTML); index++; } } </SCRIPT> <HEAD> <BODY> <DIV id="pagecontainer"> <IE:DEVICERECT <IE:LAYOUTRECT </IE:DEVICERECT> </DIV> </BODY> </HTML>
Requirements
See also
- Reference
- DeviceRect
- TemplatePrinter
- HeaderFooter
- dialogArguments
- IDM_PRINT
- IDM_PRINTPREVIEW
- onlayoutcomplete
- Other Resources
- Beyond Print Preview: Print Customization for Internet Explorer 5.5
- Print Preview 2: The Continuing Adventures of Internet Explorer 5.5 Print Customization | https://msdn.microsoft.com/en-us/library/Aa969430.aspx | CC-MAIN-2015-32 | en | refinedweb |
With need.
Solid was implemented using a frontend/backend approach aiming portability among platforms like Linux and Windows. The frontend provides the high-level API for developers using Solid and backends deal with the specific hardware issues for the different platforms.
The frontend classes provide the API for developers and are also wrappers for several kinds of devices. All frontend clases are available in kdelibs/solid/solid.
The device notifier is a singleton used to obtain information about the hardware available on the system. It provides to applications the unique entry point for hardware discovery and/or notifications about changes (with the use of Solid::DeviceNotifier::deviceAdded(const QString) and Solid::DeviceNotifier::deviceRemoved(const QString) signals). This class calls the following Solid::DeviceManager.
This (private) class maintains the state of all devices currently available on the system. Through it is possible to get, for example, the list of all devices or a list of device matching with some criteria (using Solid::Predicate).
This class represents a general hardware device. A device contains one or more device interfaces (capabilities).
A device interface represents a certain feature that a device contains. This class is on the top of the device interfaces inheritance tree. All specialized device interfaces implement it.
These classes actually represent the capabilities that a device can have. All classes extend Solid::DeviceInterface.
A Solid backend deal with the platform-specific ways of handle devices. Developers using libsolid do not use the backend classes directly. Applications do it through frontend/wrapping classes in the Solid namespace.
All backends have to implement the interfaces in kdelibs/solid/solid/ifaces (namespace Solid::Ifaces) correspondent to its devices. These interfaces are the basics of which API a given device has to provide to the frontend classes.
This diagram shows the relationships between the Solid frontend classes and the platform-specific backend classes (Foo backend). | https://techbase.kde.org/index.php?title=Development/Architecture/KDE4/Solid&direction=next&oldid=72922 | CC-MAIN-2015-32 | en | refinedweb |
import all of the how-to articles from the pkgsrc.se wiki.. | https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/tutorials/how_to_use_thumb_mode_on_arm.mdwn?rev=1.1;content-type=text%2Fx-cvsweb-markup | CC-MAIN-2015-32 | en | refinedweb |
JScript .NET, Part III: Classes and Namespaces: Packaging A Namespace - Doc JavaScript
JScript .NET, Part III: Classes and Namespaces
Packaging a Namespace
You create a namespace with the
package statement. It combines one or more classes into a logical group called a namespace. Let's look at an example:
// Create a simple package containing a class with // a single field (President). package USA { class Head { static var President : String = "Bush"; } }; // Create another simple package containing two classes. // The class Head has the field PrimeMinister. // The class Localization has the field Currency. package UK { public class Head { static var PrimeMinister : String = "Blair"; } public class Localization { static var Currency : String = "Pound"; } }; // Use another package for more specific information. package USA.Florida { public class Head { static var Governor : String = "Bush"; } }; // Declare a local class that shadows the imported classes. class Head { static var Governor : String = "Davis"; } // Import the USA, UK, and USA.Florida packages. import USA; import UK; import USA.Florida; // Access the package members with fully qualified names. print(Head.Governor); print(USA.Head.President); print(UK.Head.PrimeMinister); print(USA.Florida.Head.Governor); // The Localization class is not shadowed locally, // so it can be accessed with or without a fully qualified name. print(Localization.Currency); print(UK.Localization.Currency);
Here is the output of the above code:
Davis Bush Blair Bush Pound Pound
Notice that when the class location is ambiguous, you must use fully-qualified names. The class
Head, for example, appears in
USA,
UK,
USA.Florida, and locally, so the namespace must prefix this class. The class
Localization, however, appears only in the
UK namespace, so there is no need to use the fully-qualified variable names.
Next: How to load assemblies
Produced by Yehuda Shiran and Tomer Shiran
Created: May 6, 2002
Revised: May 6, 2002
URL: | http://www.webreference.com/js/column109/4.html | CC-MAIN-2015-32 | en | refinedweb |
Integrating SailFin-CAFE with OpenIMS
By Mohit Gupta on Sep 03, 2009
This entry demonstrates the procedure for installing OpenIMS and integrating Sailfin-CAFE with it. The attached sailfin-cafe application could then be used to establish a call between two clients registered with open-ims.
Overview
- Install OpenIMS core.
- Install Sailfin-CAFE, integrate it with OpenIMS.
- Register IMS clients with OpenIMS core.
- Deploy the cafe-app.
- Establish call between the clients.
Lets get started
Installing OpenIMS core
-- Install instructions can be found here.
Note: If you are running the DNS on the same machine as OpenIMS core, then while configuring the DNS server, edit the file /etc/dhcp3/dhclient.conf and uncomment this line: prepend domain_name_servers 127.0.0.1;
-- Start OpenIMS core components viz: pcscf, scscf, icsfc and FHoSS startup as given in the installation guide.
Setting up SailFin and SailFin-CAFE
-- Download and install SailFin from here.
-- Use SailFin CAFE promoted build 05 or latest. The download and install instructions are available here.
-- Register SailFin with OpenIMS core:
- Go to and enter hssAdmin as username & hss as password. This will lead you to FHoSS - FOKUS Home Subscriber Server webpage.
- Click on create(Application Server) under the servcies tab.
- The server setting parameters will be: Server Name - sip:hostname:5060, DiameterFDQN - hostname, Default Handling - Session-continued.
- Attach the default_ifc to the newly created server.
IMSClient setup
-- UCT IMS Client can be dowloaded at
-- OpenIMS Core has Alice and Bob as the users registered by default. Use two different IMS clients to login as Alice and Bob respectively.
Deploying test application and establishing the call
-- Download and deploy the sample application using sailfin asadmin deploy.
-- Call Establishing logic of the test App:
public class NewServlet extends HttpServlet {
@Context CommunicationSession session;
----------
protected void processRequest(HttpServletRequest request, HttpServletResponse response) {
----------
Call call = session.createCall(party1);
call.addParticipant(party2);
-----------
}
}
-- Access the url :
-- Enter user ids : Alice@open-ims.test and Bob@open-ims.test
-- Click on the call button and listen the IMS Clients ringing !!
Screencast
The screencast for the complete setup can be viewed here !! | https://blogs.oracle.com/mohitg/tags/open-ims | CC-MAIN-2015-32 | en | refinedweb |
Type Classes With An Easier Example
I recently wrote this post. It contained a lot of rather obtuse mathematics, just to introduce what was basically the example problem for the article. That’s because it was a real honest-to-goodness problem I was solving and writing papers on for algebra journals… admittedly, not the best choice for a blog aimed partly at non-mathematicians. Here, I do something similar, but with a much simpler and more general purpose problem.
Overview
We’ll be playing with converting matrices to an upper triangular form, essentially using Gaussian elimination. As a reminder, here’s what that means (I’m simplifying a little):
Goal: I have a matrix, and I’d like it to be upper triangular. (Upper triangular means that everything below the main diagonal is zero.)
Rules: I’ll allow myself to do these things to the matrix: swap any two rows or columns, or add any multiple of a row or column to another one.
Strategy: I’ll look for a non-zero element in the first column. If there is one, I’ll swap rows to move it to the first row of the matrix. Then I’ll add multiples of the first row to all of the other rows, until the entire rest of the first column is equal to zero. Then (here’s where we get a little tricky) I can just do the same thing on the rest of the matrix, ignoring the first row and column. In other words, I’ve reduced the problem to a smaller version of itself. (Yep, it’s a recursive algorithm.)
The Trick Up Our Sleeve: Instead of writing our function to operate directly on some representation of matrices, we’ll make it work on a type class. This will let us play all sorts of cool tricks.
Preliminary Stuff
I’d like to be able to declare instance for my matrices without jumping through newtype hoops, so I’ll start with a language extension.
{-# LANGUAGE TypeSynonymInstances #-}
Imports:
import Data.Maybe
Next, I need a few easy utility functions on lists. There’s nothing terribly interesting here; just a swap function, and a function to apply a function to the nth element of a list.
swap :: Int -> Int -> [a] -> [a] swap i j xs | i == j = xs | i > j = swap j i xs | otherwise = swap' i j xs where swap' 0 j (x:xs) = let (b,xs') = swap'' x (j-1) xs in b : xs' swap' i j (x:xs) = x : swap' (i-1) (j-1) xs swap'' a 0 (x:xs) = (x, a:xs) swap'' a j (x:xs) = let (b,xs') = swap'' a (j-1) xs in (b, x:xs') modifynth :: Int -> (a -> a) -> [a] -> [a] modifynth _ _ [] = [] modifynth 0 f (x:xs) = f x : xs modifynth n f (x:xs) = x : modifynth (n-1) f xs
Finally, I need a type to represent matrices. Since this is just toy code where I don’t need really high performance, a list of lists will do just fine.
type Matrix = [[Double]]
All done. On to the interesting stuff.
Building a Type Class
I already mentioned that I don’t want to operate directly on the representation of a matrix as a list of lists. Instead, I’ll declare a type class capturing all of the operations that I’d like to be able to perform.
class Eliminable a where (@@) :: a -> (Int,Int) -> Double size :: a -> Int swapRows :: Int -> Int -> a -> a swapCols :: Int -> Int -> a -> a addRow :: Double -> Int -> Int -> a -> a addCol :: Double -> Int -> Int -> a -> a
I’ve reserved the operator @@ to examine an entry of a matrix, size to give me its size, and then included functions to swap rows, swap columns, add a multiple of one row to another, and add a multiple of one column to another. Now I just need an implementation for the concrete matrix type I declared earlier.
instance Eliminable Matrix where m @@ (i,j) = m !! i !! j size m = length m swapRows p q m = swap p q m swapCols p q m = map (swap p q) m addRow k p q m = modifynth q (zipWith comb (m!!p)) m where comb a b = k*a + b addCol k p q m = map (\row -> modifynth q (comb (row!!p)) row) m where comb a b = k*a + b
Done.
Programming With Our Type Class
Recall that the substantial portion of the elimination algorithm earlier was to zero out most of the first column of the matrix, leaving only the top element possibly non-zero. We’re now in a position to implement this piece of the algorithm. It’s not all that tricky.
zeroCol :: Eliminable a => a -> a zeroCol m = let clearRow j m' = addRow (-(m' @@ (j,0) / m' @@ (0,0))) 0 j m' clearCol m' = foldr clearRow m' [1 .. size m - 1] in case listToMaybe [ i | i <- [0 .. size m - 1], m @@ (i,0) /= 0 ] of Nothing -> m Just 0 -> clearCol m Just i -> clearCol (swapRows 0 i m)
This function looks only at the first column of the matrix, and clears it out my moving a non-zero element to the top, and then adding the right multiple of that column to all those below it. The important thing to notice is that this was implemented for any arbitrary instance of the type class I called “Eliminable”. This will be incredibly useful in the next few steps.
Using the Type Class
The next part of the algorithm is to ignore the first row and column, and perform the same operation on the submatrix obtained by deleting them. It’s actually a bit unclear how to implement this. We have a few options:
- Modify zeroCol above, to have it take a parameter representing the current column, and do everything relative to the current column. This is pretty messy. It actually might not be too messy in this case, but if the algorithm I were implementing were a little less trivial to begin with, It might definitely be quite messy.
- Actually perform the elimination on a separate matrix, and then somehow graft the first row and column from this matrix onto that one. Again, this could get pretty messy in general.
- Change the representation of the submatrices.
I’ll choose the third. Luckily, this isn’t too tough, since we have a type class. I’ll just define a newtype, and a new instance, to encapsulate the idea of a matrix with the first row and column deleted.
newtype SubMatrix a = SubMatrix { unwrap :: a } instance Eliminable a => Eliminable (SubMatrix a) where (SubMatrix m) @@ (i,j) = m @@ (i+1,j+1) size (SubMatrix m) = size m - 1 swapRows p q (SubMatrix m) = SubMatrix (swapRows (p+1) (q+1) m) swapCols p q (SubMatrix m) = SubMatrix (swapCols (p+1) (q+1) m) addRow k p q (SubMatrix m) = SubMatrix (addRow k (p+1) (q+1) m) addCol k p q (SubMatrix m) = SubMatrix (addCol k (p+1) (q+1) m)
Using this new instance, I can easily complete the elimination algorithm.
eliminate :: Eliminable a => a -> a eliminate m | size m <= 1 = m | otherwise = unwrap . eliminate . SubMatrix . zeroCol $ m
Yep, that’s all there is to it, and we have a working elimination algorithm.
Type Class Games
Suppose, now, that I want a lower triangular matrix. It might initially seem that I’m out of luck; I need to write all this code again. That turns out not to be the case, though. If I just teach the existing code how to operate on the transpose of a matrix instead of the matrix I’ve given it, then all is well! Here goes.
newtype Transposed a = Transposed { untranspose :: a } instance Eliminable a => Eliminable (Transposed a) where (Transposed m) @@ (i,j) = m @@ (j,i) size (Transposed m) = size m swapRows p q (Transposed m) = Transposed (swapCols p q m) swapCols p q (Transposed m) = Transposed (swapRows p q m) addRow k p q (Transposed m) = Transposed (addCol k p q m) addCol k p q (Transposed m) = Transposed (addRow k p q m)
To implement the lower triangular conversion, now, is simple.
lowerTriang :: Eliminable a => a -> a lowerTriang = untranspose . eliminate . Transposed
Any number of changes to the operation we’re trying to perform can often be expressed by simply substituting a different representation for the type on which we’re performing the operation. (Thinking about this fact can actually get pretty deep.)
Side Calculations
There’s a fairly common problem that many people run into when moving from an imperative language to a functional one. This can apply to learning functional programming, converting existing imperative code, or even just translating the concepts in one’s mind when talking to someone who thinks imperatively. The problem goes something like this: you have some code that performs some computation, and now you want to change the code to add some new concept to the existing computation. Often, the new idea you’re trying to add could be performed trivially in an imperative language, by adding print statements to some function seven layers in, or by keeping track of some value in a global variable, or in some field of some object. In the functional setting, these aren’t available to you.
The minimalist answer is simply to add all the plumbing code; new parameters and return values, etc. to every function in the entire call tree. To say the least, this is unappealing! To a new Haskell programmer, the obvious answer often looks like monads. However, again, the entire call tree has to be rewritten in a monadic style, and besides, this is a tad like using a rocket launcher to rid the house of mice.
The solution I propose here is that many times, it’s sufficient to use a type class. Here’s an example.
Problem: Calculate the determinant of a matrix efficiently.
Determinants can be calculated in a lot of different ways, but one of the most common uses elimination. The interesting fact here is that once you’ve got a triangular matrix (lower or upper; doesn’t matter), then its determinant is just the product of its diagonal elements. Furthermore, we know precisely what happens to the determinant when you swap rows or columns (it flips sign, but the magnitude stays the same), or when you add a multiple of one row or column to the other (it stays the same). So a (very fast) way to calculate a determinant is to perform elimination, but also keep track, at each step, of what you’ve done to the determinant so far.
So now we need, not merely a matrix, but a pair consisting of a matrix and some side information – namely, which change we’ve made so far to the determinant.
data WithDeterminant a = WithDeterminant Double a instance Eliminable a => Eliminable (WithDeterminant a) where (WithDeterminant _ m) @@ (i,j) = m @@ (i,j) size (WithDeterminant _ m) = size m swapRows p q (WithDeterminant d m) = WithDeterminant (-d) (swapRows p q m) swapCols p q (WithDeterminant d m) = WithDeterminant (-d) (swapCols p q m) addRow k p q (WithDeterminant d m) = WithDeterminant d (addRow k p q m) addCol k p q (WithDeterminant d m) = WithDeterminant d (addCol k p q m)
As before, once we’ve defined the appropriate instance, the implementation is actually quite easy.
diags :: Matrix -> [Double] diags [] = [] diags (r:rs) = head r : diags (map tail rs) determinant :: Matrix -> Double determinant m = let WithDeterminant d m' = eliminate (WithDeterminant 1 m) in d * product (diags m')
The resulting determinant function actually performs quite well. For example, calculation of the determinant of a 100 by 100 matrix is done in 1.61 seconds, fully interpreted in GHCi. I didn’t bother compiling with optimization to see how well that does, nor replacing the inefficient list-of-lists representation of matrices with one based on contiguous memory or arrays. (Edit: Compiled and optimized with GHC, but still using the list-of-list representation, the time is around a third of a second.)
(It’s worth pointing out that automatic differentiation is another very impressive example of this same technique, except that it uses the standard numeric type classes instead of a custom type class.)
Conclusion
The point of this article is that plain old type classes in Haskell can be used to make your purely functional code very flexible and versatile. By defining a type class to capture the concept of a set of related operations, I was able to achieve:
- Choice of data structures. Had I wanted to use a contiguous array instead of a list of lists, I could have easily done so.
- Easier programming. For example, operating on a submatrix of the original matrix became much easier.
- Flexible code. I was able to get lower triangular matrices, too, without rewriting the code.
- Better composability. I easily reused by upper triangular matrix calculation to find determinants, even though additional calculations had to be traced through the original.
- Separation of concerns. When I started, I never even dreamed that I might need to trace determinant calculations through the process. That got added later on, in it’s own separate bit of code. If someone else wanted different plumbing… say, for logging, or precondition checking, or estimating the possible rounding error, all they’d need to do is define a new instance of the type class.
None of this is new, of course. But type classes are definitely an underused language feature by many Haskell programmers.
A side note: If you compile this code and generate a random 100 x 100 matrix with entries between 0 and 1, as I did, you’ll find the determinant is somewhere in the range of 10 to the 25th power! At first, I thought this was an error. It’s actually correct, though. Think of it this way: the determinant of a 100 x 100 matrix is the sum of 100! (that’s 100 factorial) signed elementary products. Half are positive, and the other half are negative. If you look at the expected value of X^100, where X is a uniformly distributed random variable over (0,1), you get something like 1e-40, a really small number. But, half of 100 factorial is about 5e+157, a really, really big number. Their product is still very large: about 5e+117. The actual determinant the difference between two such numbers, and its expected value (even simplifying as I am with some entirely incorrect assumptions of independence) depends on the variance as well, but it’s indeed quite easy to see how the difference between two numbers this large can be so large; in fact, it’s surprising that it isn’t larger.
Side note. I wonder, what is the most ideomatical code to generate matrix of given size in Haskell…
There is another underused feature you are using, specifically in the definition of eliminate. A hint, if necessary: try commenting out the type signature of eliminate.
You post impressed me so much, that I’ve reimplemented/copied your code, with some niceties like Type Families :)
Wow, that’s very nice. Thank you! The type synonym for elements works out well.
I made a couple changes to your paste; hope you don’t mind. I fixed the transposed instance so that your makeBottomLeftDiagonal works. I also avoided a (swap 0 0) in one place — you either needed to do that, or else change the WithDeterminant instance so it doesn’t negate the determinant when you swap a row or column with itself.
makeDiagonal is still not guaranteed to work for singular matrices, but will work for matrices of full rank. For a counterexample, consider (makeDiagonal $ ListMatrix [[0,0],[2,3]])
Oh, you’re right, greate! Maybe there is a place for this code on Hackage?
makeDiagonal was supposed to solve systems of linear equations. Something like:
solve = diagonal . makeDiagonal
Victor, go ahead. All of the code I post on my blog is in the public domain.
That was gorgeous. I hope you do more like this. | https://cdsmith.wordpress.com/2009/09/20/side-computations-via-type-classes/?like=1&_wpnonce=d1fb2cdf27 | CC-MAIN-2015-32 | en | refinedweb |
This article has been dead for over six months: Start a new discussion instead
vishalonne
Junior Poster in Training
244 posts since Jun 2009
Reputation Points: 12 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
•Community Member
Hello
I am try to learn Brasenham Line Drawing Algorithm. I wrote this program in Turbo C
# include <stdio.h> # include <conio.h> # include <graphics.h> void main() { int dx,dy,x,y,p,x1,y1,x2,y2; int gd,gm,errorcode; clrscr(); printf("\n\n\tEnter the co-ordinates of first point : "); scanf("%d %d",&x1,&y1); printf("\n\n\tEnter the co-ordinates of second point : "); scanf("%d %d",&x2,&y2); dx = (x2 - x1); dy = (y2 - y1); p = 2 * (dy) - (dx); x = x1; y = y1; gd=DETECT,gm,errorcode; initgraph(&gd,&gm,"e:\\tc\\bgi"); errorcode=graphresult(); if(errorcode!=grOk) { printf("%s",grapherrormsg(errorcode)); getch();return; } putpixel(x,y,WHITE); while(x <= x2) { if(p < 0) { x=x+1; y=y; p = p + 2 * (dy); } else { x=x+1; y=y+1; p = p + 2 * (dy - dx); } putpixel(x,y,WHITE); } getch(); closegraph(); }
if I enter coordinates - 180,250,500,600 the output a line going diagnoly downwards, but if I enter 500,600,180,250 i am getting BLANK SCREEN.
Please explain if I'm wrong | https://www.daniweb.com/software-development/c/threads/441546/brasenham-line-drawing-algorithm-problem | CC-MAIN-2015-32 | en | refinedweb |
Computing Mean Absolute Deviation
Summary Statistics offers two methods for computation of mean absolute deviation:
Method VSL_SS_METHOD_FAST is a performance-oriented implementation of the algorithm.
Method VSL_SS_METHOD_FAST_USER_MEAN is an implementation of the algorithm when a user-defined mean is provided.
The calculation is straightforward and follows the pattern of the example below:
#include "mkl_vsl.h" #define DIM 3 /* dimension of the task */ #define N 1000 /* number of observations */ int main() { VSLSSTaskPtr task; float x[DIM][N]; /* matrix of observations */ float mnad[DIM]; MKL_INT p, n, xstorage; int status; /* Parameters of the task and initialization */ p = DIM; n = N; xstorage = VSL_SS_MATRIX_STORAGE_ROWS; /* Create a task */ status = vslsSSNewTask( &task, &p, &n, &xstorage, (float*)x, 0, 0 ); /* Initialize the task parameters */ status = vslsSSEditTask( task, VSL_SS_ED_MNAD, mnad ); /* Compute median absolute deviation in observations */ status = vslsSSCompute(task, VSL_SS_MNAD, VSL_SS_METHOD_FAST ); /* Deallocate the task resources */ status = vslSSDeleteTask( &task ); return 0; }
The size of the array to hold mean absolute deviations should be sufficient to hold at least p elements, where p is the dimension of the task.
Computation of mean absolute deviation is only possible for data arrays available at once, or in separate blocks of the dataset.
To achieve the best results, before you compute mean absolute deviation, provide the buffer for estimate of mean (or corresponding sum) even if you do not need this estimate. | https://software.intel.com/fr-fr/node/497932?language=ru | CC-MAIN-2015-32 | en | refinedweb |
#include <StelLocation.hpp>
Return a short string which can be used in a list view.
Output the location as a string ready to be stored in the user_location file.
Parse a location from a line serialization.
Location/city name.
English country name or empty string.
State/region name (usefull if 2 locations of the same country have the same name).
English planet name.
Longitude in degree.
Latitude in degree.
Altitude in meter.
Light pollution index following Bortle scale.
A hint for associating a landscape to the location.
Population in number of inhabitants.
Location role code C/B=Capital, R=Regional capital, N=Normal city, O=Observatory, L=lander, I=spacecraft impact, A=spacecraft crash. | http://www.stellarium.org/doc/0.10.4/classStelLocation.html | CC-MAIN-2015-32 | en | refinedweb |
No more ({ }) for single-element annotation value, thanks to varargs
By Cheng Fang on Apr 04, 2006
I found this accidentally that I can assign a single value to an array when specifying annotation fields. The other day when I wrote a simple EJB3 testcase, I realized there was a type mismatch: the value of
@Interceptors is declared as
Class[], but I just gave it a single value (
MyInterceptors.class). But surprisingly no complaints from javac!
import javax.ejb.Stateless;
import javax.interceptor.Interceptors;
@Stateless
@Interceptors(MyInterceptors.class)
public class HelloBean {...}
I'm so used to the strong-typing that I couldn't believe this. I thought I was using a wrong version of javax.interceptor.Interceptors.class in an old javaee.jar. I even searched all my classpath for any duplicate jar/class files.
Now with the help of Variable Arguments
(varargs), I can simplify
@Interceptors({MyInterceptors.class}) ==>
@Interceptors(MyInterceptors.class)
@Remote({HelloIF.class}) ==>
@Remote(HelloIF.class)
@Local({HelloIF.class}) ==>
@Local(HelloIF.class)
@EntityListeners({MyListener.class}) ==>
@EntityListeners(MyListener.class)
From a documentation point of view, the simpler form hides the true field type and may cause confusion.
Of course, for multi-element value you still use the old style:
@Interceptors({MyInterceptors.class, HisInterceptor.class})
technorati tags: varargs, annotation, JavaEE | https://blogs.oracle.com/chengfang/entry/p_i_found_this_accidentally | CC-MAIN-2015-32 | en | refinedweb |
IRC log of lld on 2011-03-10
Timestamps are in UTC.
14:53:04 [RRSAgent]
RRSAgent has joined #lld
14:53:04 [RRSAgent]
logging to
14:53:10 [emma]
emma has joined #lld
14:53:13 [antoine]
rrsagent, bookmark
14:53:13 [RRSAgent]
See
14:53:22 [antoine]
zakim, this will be lld
14:53:22 [Zakim]
ok, antoine; I see INC_LLDXG()10:00AM scheduled to start in 7 minutes
14:53:34 [antoine]
Meeting: LLD XG
14:53:46 [antoine]
Chair: Tom
14:54:18 [antoine]
Agenda:
14:54:57 [antoine]
Previous: 2011-03-03 -
14:55:55 [Zakim]
INC_LLDXG()10:00AM has now started
14:56:02 [Zakim]
+[IPcaller]
14:56:10 [antoine]
zakim, IPcaller is me
14:56:10 [Zakim]
+antoine; got it
14:56:57 [antoine]
Regrets: kai, joachim, jodi, uldis, kim, felix, lars
14:57:15 [antoine]
rrsagent, please make record public
14:57:27 [antoine]
rrsagent, please draft minutes
14:57:27 [RRSAgent]
I have made the request to generate
antoine
14:57:32 [Zakim]
+??P7
14:57:47 [antoine]
zakim, ??P7 is TomB
14:57:47 [Zakim]
+TomB; got it
14:58:02 [Zakim]
+ +33.1.53.79.aaaa
14:58:06 [Zakim]
+ +1.614.764.aabb
14:58:18 [antoine]
zakim, aabb is jeff__
14:58:18 [Zakim]
+jeff__; got it
14:58:36 [GordonD]
GordonD has joined #lld
14:58:57 [jeff__]
zakim, mute me
14:58:57 [Zakim]
jeff__ should now be muted
14:59:37 [Zakim]
+[IPcaller]
14:59:48 [antoine]
zakim, IPcaller is GordonD
14:59:48 [Zakim]
+GordonD; got it
14:59:53 [ww]
ww has joined #lld
15:00:06 [michaelp]
michaelp has joined #lld
15:00:12 [pmurray]
pmurray has joined #lld
15:00:19 [kefo]
kefo has joined #lld
15:00:22 [marcia]
marcia has joined #lld
15:00:23 [Zakim]
+[LC]
15:00:24 [kefo]
zakim, LC is me
15:00:24 [Zakim]
+kefo; got it
15:00:40 [rsinger]
rsinger has joined #lld
15:00:45 [Zakim]
+jeff__.a
15:00:52 [TomB]
Scribe: kefo
15:00:57 [TomB]
Scribenick: kefo
15:00:58 [kefo]
zakim, mute m
15:00:59 [Zakim]
michaelp should now be muted
15:01:00 [kefo]
zakim, mute me
15:01:01 [Zakim]
kefo should now be muted
15:01:07 [Zakim]
+ +1.330.289.aacc
15:01:27 [Zakim]
+ +1.423.463.aadd
15:01:39 [marcia]
zakim, mute me
15:01:40 [antoine]
zakim, aadd is rsinger
15:01:47 [Zakim]
marcia should now be muted
15:01:51 [Zakim]
+rsinger; got it
15:02:03 [Zakim]
+??P26
15:02:08 [ww]
zakim, ??P26 is me
15:02:09 [antoine]
zakim, ??P26 is ww
15:02:11 [Zakim]
+ww; got it
15:02:13 [Zakim]
I already had ??P26 as ww, antoine
15:03:07 [rsinger]
google talk is currently offering free calls to the US, btw
15:03:35 [kcoyle]
kcoyle has joined #lld
15:03:47 [Zakim]
+??P28
15:03:58 [pmurray]
zakim ??P28 is me
15:04:01 [Zakim]
+??P0
15:04:05 [ww]
Zakim, mute me
15:04:05 [Zakim]
ww should now be muted
15:04:16 [pmurray]
zakim, ??P28 is me
15:04:16 [Zakim]
+pmurray; got it
15:05:17 [ww]
rsinger: good to know... in my office now, i would guess that the u of edinburgh has a good bulk ld deal so hopefully i won't bankrupt them :)
15:05:20 [kefo]
TOPIC: Admin
15:05:20 [emma]
rrsagent, plese draft minutes
15:05:20 [RRSAgent]
I'm logging. I don't understand 'plese draft minutes', emma. Try /msg RRSAgent help
15:05:30 [emma]
rrsagent, please draft minutes
15:05:30 [RRSAgent]
I have made the request to generate
emma
15:05:47 [kefo]
TomB: proposes accepting mtg minutes
15:05:51 [ww]
+1
15:06:43 [emma]
q+
15:06:50 [TomB]
ack emmanuelle
15:06:53 [kefo]
[ I missed a lot of that - noise in the room on my end
15:07:09 [ww]
an hour earlier is actually better for me :)
15:08:03 [emma]
I won't chair on march 24th, Antoine will
15:08:21 [kefo]
TomB: Emma won't chair on 24 march, Antoine will
15:08:29 [kefo]
... moving on to asia pacific telecon
15:09:11 [kefo]
... it's late for many, but happy to accommodate asia pacific participants and thanks to those joining from US and Europe
15:09:17 [kefo]
... Can we identify a scribe?
15:10:42 [kefo]
... Also, goals:).
15:10:51 [kefo]
... Will be a informal call.
15:11:03 [kefo]
... Do others have suggestions or comments on this plan?
15:11:07 [antoine]
sounds good!
15:11:34 [Zakim]
-pmurray
15:11:38 [kefo]
... I'll try to confirm moving the call an our earlier.
15:11:43 [kefo]
TOPIC: Final report draft
15:11:46 [Zakim]
+ +1.614.372.aaee
15:11:58 [kefo]
TomB: About the executive summary.
15:11:59 [antoine]
zakim, aaee is pmurray
15:12:00 [Zakim]
sorry, antoine, I do not recognize a party named 'aaee'
15:12:21 [kefo]
... Benefits: Emmanuelle and Ed are working on benefits. Would either like to comment?
15:12:29 [kefo]
Emma: Not started yet, personally.
15:12:54 [kefo]
Are these actions or topics?
15:13:02 [kefo]
TOPIC: Use case and requirements
15:13:10 [TomB]
15:13:11 [kefo]
Can topics be "continued?
15:13:42 [kefo]
How to is not the problem. .... Thanks emma
15:13:53 [emma]
ACTION: emma and ed to start curating a section on benefits of LLD for libraries [recorded in
]
15:13:59 [emma]
-- continues
15:13:59 [kefo]
ACTION: Use cases and requirements (represented via clusters, plus an annotated list of use cases, plus requirement list?)
15:14:30 [kefo]
TomB: We'll have a separate report on use cases. Not too long, but enough.
15:15:00 [kefo]
... Would anyone like to edit this section of hte report?
15:15:23 [kcoyle]
q+
15:15:34 [kefo]
... You do get to place your name on separate sections (as "editor"), which may be attractive if anyone needs to demonstrate impact of participating in this group/
15:15:46 [emma]
q-
15:16:27 [kefo]
kcoyle: Do we really need separate documents (one for hte Use Cases)?
15:16:52 [kefo]
... the clusters have been distilled. Perhaps we just need a wiki page to point . I feel we've done this already.
15:16:57 [emma]
a report makes it more official for dissemination ?
15:17:10 [kefo]
TomB: I think we have to. But I'd like to formalize it a little. It does not need to be complicated.
15:17:24 [antoine]
q+
15:17:25 [kefo]
kcoyle: I don't see it as a "document" but a "wiki" page because I'd want it linked.
15:17:28 [marcia]
Antoine: What did the SKOS do for the usecases?
15:17:33 [emma]
ack kc
15:17:35 [kefo]
TomB: I see. No a wiki page is fine. It does not need to be offline.
15:17:50 [TomB]
ack antoine
15:18:03 [kefo]
Antoine: I'd like to comment, also, Marcia asked a question about UC in SKOS.
15:18:52 [kefo]
... We took some of hte Use cases in SKOS and that document linked to other wiki pages . So it was a mix between placing some content in a document and placing some in a wiki.
15:19:52 [TomB]
Example of archived wiki page:
15:19:52 [kefo]
... Regardin Karen's suggestion: A wiki can be edited, making it dynamic, and the W3C cannot archive a Wiki in quite the same way as a "document." They're are labelled as "archived" and no longer actively maintained.
15:20:29 [kefo]
TomB: Here is an example of a frozen wiki page:
15:20:47 [kefo]
TomB: Not going to resolve this unless we have avolunteer.
15:20:51 [kefo]
--continues
15:21:13 [kefo]
-- continnues
15:21:15 [kefo]
-- continues
15:21:18 [kefo]
I give mup/
15:21:45 [kefo]
ACTION: ACTION: Uldis and Jodi to create social uses cluster
15:21:50 [kefo]
-- continues
15:22:00 [emma]
rrsagent, please draft minutes
15:22:00 [RRSAgent]
I have made the request to generate
emma
15:22:04 [kefo]
zakim, unmute me
15:22:04 [Zakim]
kefo should no longer be muted
15:22:40 [emma]
Kefo : sent an email to the list that completes the action
15:22:45 [antoine]
15:22:58 [TomB]
15:23:33 [kefo]
ACTION: Kevin and Joachim to review content of existing clusters to see where the web service dimension could be strengthened.
15:23:34 [kefo]
done
15:23:47 [emma]
--done
15:24:23 [kefo]
zakim, mute me
15:24:23 [Zakim]
kefo should now be muted
15:24:46 [ww]
Zakim, unmute me
15:24:46 [Zakim]
ww should no longer be muted
15:24:50 [kefo]
ACTION: Available data (vocabularies, datasets) (Antoine and Jeff)
15:24:57 [kefo]
- continues
15:25:10 [kefo]
TomB: We need to start closing some of these open actions.
15:25:17 [emma]
+1 for closing the action
15:25:19 [antoine]
+1
15:25:35 [ww]
Zakim, mute me
15:25:35 [Zakim]
ww should now be muted
15:25:47 [antoine]
ACTION: Volunteers to send login information (openid credentials) to William Waite to curate LLD group on CKAN [recorded in
]
15:25:49 [antoine]
--done
15:25:55 [kefo]
TomB: Main point of call. Gordon's analysis/
15:26:23 [antoine]
TOPIC: PROBLEMS / LIMITATIONS / ISSUES - SECTION IN REPORT
15:26:25 [kefo]
GordonD: Let's concentrate on sections 1 and 3. Section 2 (granularity) can probably be incporated into problems and limitations.
15:26:57 [antoine]
->
15:27:02 [kefo]
... we'll being with section 1: Issues for further discussion
15:27:26 [kefo]
... benefits of "Constrained versus unconstrained properties and classes"
15:27:34 [kefo]
... there's been recent disucssion about this on the list.
15:27:44 [GordonD]
15:27:51 [kefo]
... Tom summarized the discussion nicely.
15:28:24 [kefo]
... If we replace direct references to FRBR to something more generic like "library standards" we can get something out of this page,
15:29:03 [kefo]
... Are there any comments on the pros and cons of "Constrained versus unconstrained properties and classes"?
15:29:21 [kefo]
TomB: Are you saying that both are needed? That if you do not have "constrained" properties you will lose information?
15:29:36 [kefo]
GordonD: Yes, that is what I'm saying.
15:29:49 [LarsG]
LarsG has joined #lld
15:29:52 [emma]
q+ to suggest getting inspiration from DC
15:29:58 [kcoyle]
q+
15:30:09 [kefo]
TomB: The value of constrained properties allows for inferencing of more knowledge.
15:30:23 [Zakim]
+[IPcaller]
15:30:34 [LarsG]
zakim, IPcaller is me
15:30:34 [Zakim]
+LarsG; got it
15:30:42 [LarsG]
zakim, please mute me
15:30:42 [Zakim]
LarsG should now be muted
15:30:47 [TomB]
ack emma
15:30:47 [Zakim]
emma, you wanted to suggest getting inspiration from DC
15:30:50 [kefo]
GordonD: Yes. I take the point of unrestrained properties, and I've worked to find a middle ground with some groups.
15:31:29 [kefo]
Emma: I think we can look at DC, where the elements were unconstrained versus later definitions where some ranges are applied.
15:32:02 [kefo]
... Some use the elements because they are unconstrained, others use the otheres. But, still, many do not recognize the distinction.
15:32:31 [jeff__]
q+
15:32:39 [kefo]
GordonD: I agree. People want and require guidance on this. How does someone choose a set of classes and properties from namespaces. It might be obvious to us, but not others.
15:33:00 [antoine]
q+
15:33:03 [emma]
s/otheres/constrained terms
15:33:06 [kefo]
.. I see a more general guidance piece coming out of this that addresses the mixing and matching and the choices implementers have/
15:33:26 [TomB]
ack kcoyle
15:34:21 [kefo]
kcoyle: I'm going to question this. I see a far amount of guidance in the unconstrained properties. Take, for example, Work Title. Must that be constrained to FRBR Entity - it already has a clear meaning? It is defined inepdenently.
15:34:42 [TomB]
q+ to point out that SKOS has both constrained and unconstrained properties. The question is: which properties need to be constrained? Hopefully no more than necessary.
15:34:51 [emma]
+1, Karen : guidance & data constraints are 2 different things
15:34:56 [kefo]
... Some analysis should be done. Some *need* to be constrained to have meaning. But others do not.
15:34:59 [antoine]
+1
15:35:04 [michaelp]
+1
15:35:05 [rsinger]
+1
15:35:08 [kefo]
... I see constraints as overkill.
15:35:10 [TomB]
q?
15:35:28 [michaelp]
q+
15:35:37 [kefo]
GordonD: I disagree. What will happen people will look at hte documentation and will choose a property based on the definition and not its context.
15:36:36 [kefo]
kcoyle: That argues for entity constraints on everything in the sem web.
15:36:47 [kefo]
GordonD: Library data is particularly semantically rich.
15:36:59 [kefo]
kcoyle: I don't know if it is that much different than other data.
15:37:15 [jeff__]
ack me
15:37:16 [TomB]
ack jeff__
15:37:21 [kefo]
... I don;t see the problem, the need to constrain.
15:37:44 [kefo]
jeff_: I tend to agree with GordonD. The constraints help to tell me what they mean (not just how to use them).
15:38:13 [kcoyle]
or subclassed to rda without constraints, as in the registry
15:38:30 [kefo]
... They provide a level of confidence in interoperability. But you can achieve this my constraining the FRBR ontology but sub-classing FRBR classes/properties to DC, for example.
15:38:40 [kefo]
GordonD: Yes. That is the middle ground.
15:38:46 [emma]
s/my/by
15:39:05 [jeff__]
zakim, mute me
15:39:05 [Zakim]
jeff__ should now be muted
15:39:16 [TomB]
ack antoine
15:39:23 [kefo]
... Use contrained versions where possibe and suitable to protect against data loss, but unconstrained when it matters less.
15:39:34 [kefo]
Antoine: I feel a little uncomfortable with constrained as well.
15:39:48 [jeff__]
q+
15:41:03 [kefo]
... We should be careful about the granularity of hte semantics we want with these constraints.
15:41:09 [jeff__]
I agree with Antoine on the point of overconstraint
15:41:17 [kcoyle]
Here is a place to see RDA properties by entity:
15:41:52 [kefo]
... A benefit of constraints is that I can *infer* knowledge. But, this can be remedied with more expressing facts more explicitly.
15:42:43 [rsinger]
+1
15:42:46 [TomB]
ack TomB
15:42:47 [Zakim]
TomB, you wanted to point out that SKOS has both constrained and unconstrained properties. The question is: which properties need to be constrained? Hopefully no more than
15:42:49 [Zakim]
... necessary.
15:42:50 [kefo]
... Finally, a practical addition to argument against constraints: you are adding many elements to your namespace.
15:43:39 [kefo]
TomB: When defining SKOS we wanted to keep it as simple as possible. So, some have domains, but others do not. Labelling properties are not restricted only to Concepts. I can be a "preferred label" for abnything you want to use it for.
15:44:05 [kefo]
... We were cautious about restricting domains and ranges in order to facilitate adoption and use.
15:44:20 [kcoyle]
q+
15:44:43 [ksclarke]
ksclarke has joined #lld
15:44:46 [kefo]
... If you mechanically replicate properties and classes for *everything* it can lead to a proliferation of classees and properties. Perhaps the constraints should only be used prudently and carefully.
15:44:53 [TomB]
ack michaelp
15:45:11 [ksclarke]
ksclarke has left #lld
15:45:22 [kefo]
michaelp: My comments follow along the lines of Antoine's and Tom's.
15:47:01 [kefo]
... Constraints tend to be used to specify semantically what we mean. We should be careful about what constraints here mean. In OWL, constraints can negatively impact interoperability because of inconsistency.
15:47:35 [kefo]
... OWL makes assumptions about hte entities based on the properties. It's not *meaning* but "inferencing."
15:47:39 [rsinger]
i completely agree with this
15:47:43 [jeff__]
ack me
15:47:57 .
15:48:05 [LarsG]
+1 for what michaelp said
15:48:14 [emma]
+1, michaelp
15:48:19 [antoine]
q+ for proposing a modelling exercise
15:48:22 [kefo]
jeff_: I appreciate constraints when they make sense.
15:48:30 [ww]
validation vs. inference -- validation means applying inference rules to exhaustion and not entailing a contradiction (modulo cardinality and such which dodn't work well)
15:48:40 [TomB]
q?
15:48:49 [LarsG]
s/dodn't/didn't/
15:48:53 [TomB]
ack kcoyle
15:49:06 [jeff__]
zakim, mute me
15:49:06 [Zakim]
jeff__ should now be muted
15:49:27 [kefo]
karen: I want to clarify a couple of things. There are alot of levels between constrained and unconstrained RDA.
15:49:46 [kefo]
... We should consider that *some* require constraints. Therefore, not an all or nothing view.
15:50:06 [kefo]
.. People are concerned about WEMI Group 1, but less so Group 2 & 3.
15:50:15 [kefo]
.. People are concerned about the constraints on WEMI Group 1, but less so Group 2 & 3.
15:50:31 [kefo]
... We should consider constraints applied to application profiles.
15:51:26 [kefo]
GordonD: Communities have invested huge effort into these models. They're well-defined and structured. i"m a little surprised that such rich models are not being welcomed as much as I would have expected.
15:52:10 [kefo]
... We're trying to get general points out of this discussion. The details about the constraints on WEMI, for example, are us talking about the trees and missing the forest.
15:52:26 [TomB]
ack antoine
15:52:26 [Zakim]
antoine, you wanted to discuss proposing a modelling exercise
15:52:51 [kefo]
Antoine: Agree with Gordon. We could be talking about any model., not just a FRBR one.
15:54:20 [kefo]
... Could we continue this discussion by a type of modeling exercise? Taking the name and consider its modeling with properties versus classes.
15:54:36 [kcoyle]
i would like to see properties v. classes modeled
15:54:48 [kefo]
GordonD: I think this is a good proposal.
15:54:52 [rsinger]
maybe in an 88 post email thread ;)
15:55:02 [antoine]
:-D
15:55:07 [kcoyle]
:-)
15:55:13 [kefo]
TomB: Gordon, bring us home...
15:55:28 [kefo]
GordonD: Application profiles, OWL ontologies.
15:55:37 [LarsG]
perhaps we could just have it as an open issue in the final report...
15:55:39 [kefo]
... Which might be better?
15:56:02 [kcoyle]
is this a matter for our report, or a question we want to incubate?
15:56:11 [kefo]
... Perhaps it would be best to outline the pros and cons to each, which would touch on the constrained versus unconstrained issue.
15:57:06 [TomB]
q+ to scope what LLD XG can say and what we can identify as a problem
15:57:09 [kefo]
... There's little agreement about hte *best* approach, but we could provide guidance by outlining the options.
15:57:18 [TomB]
ack TomB
15:57:18 [Zakim]
TomB, you wanted to scope what LLD XG can say and what we can identify as a problem
15:57:49 [kcoyle]
+1
15:58:05 [TomB]
ack kcoyle
15:58:15 [kefo]
TomB: I think it's great if we can make some progress on this topic by looking at examples, but it might be unrealistic to provide solutions versus identifying the problem. We nee to be realistic about what we can do, especially in the time reamining.
15:58:37 [kefo]
GordonD: Quickly to section 3, linked data and legacy records
15:59:12 [kefo]
... In many ways this is the flip side of what we were just talking about. Libraries are sitting on mounds of data. Many are beginning to see how opening this up would be beneficial.
15:59:18 [kcoyle]
q+
15:59:29 [TomB]
ack kcoyle
15:59:30 [kefo]
... We've had a number of discussions about this and I think we can bring some of these issues in.
15:59:37 [kefo]
... Do others have something to say?
16:00:05 [rsinger]
q+
16:00:09 [kefo]
kcoyle: I think legacy data nad the constraint issue come together. Hard to move data into a constrained model.
16:00:59 [kefo]
GordonD: I actually see the existence of constrained properties assisting with providing additional value to legacy data.
16:02:09 [kefo]
... for example, one could output standard MARC records to ISBD, as an initial step, and then, using property/class relationships, move to other namespaces, finally ending on a more FRBR model. But I just thinking out loud.
16:02:13 [TomB]
ack rsinger
16:02:38 [kcoyle]
and remember that there is a lot of non-library bibliographic dta
16:02:43 [kcoyle]
s/dta/data
16:02:53 [kefo]
rsinger: Not seeing how we will bridge the gap between current models/formats and a future one.
16:03:31 [kefo]
GordonD: We do the best we can. history has a way of working these things out.
16:03:38 [rsinger]
fair enough
16:03:58 [antoine]
++ for optimistic observation as closing remark :-)
16:03:58 [kefo]
TomB: We need to adjourn. I look forward to talking to others tomorrow to talk about problems and issues.
16:04:08 [Zakim]
-michaelp
16:04:09 [Zakim]
-jeff__
16:04:09 [Zakim]
-marcia
16:04:11 [Zakim]
-kcoyle
16:04:11 [LarsG]
bye
16:04:12 [ww]
thanks !
16:04:12 [kefo]
.. Mtg adjourned
16:04:14 [Zakim]
-GordonD
16:04:15 [Zakim]
-ww
16:04:17 [Zakim]
-rsinger
16:04:21 [kefo]
zakim, unmute me
16:04:21 [Zakim]
kefo should no longer be muted
16:04:23 [Zakim]
-LarsG
16:04:25 [TomB]
zakim, who is on the call?
16:04:25 [Zakim]
On the phone I see antoine, TomB, emma, kefo, pmurray (muted)
16:04:28 [antoine]
zakim, please list attendees
16:04:28 [Zakim]
As of this point the attendees have been antoine, TomB, +33.1.53.79.aaaa, +1.614.764.aabb, emma, jeff__, GordonD, kefo, michaelp, +1.330.289.aacc, marcia, +1.423.463.aadd, rsinger,
16:04:32 [Zakim]
... ww, kcoyle, pmurray, +1.614.372.aaee, LarsG
16:04:37 [Zakim]
-pmurray
16:04:39 [antoine]
rrsagent, please draft minutes
16:04:39 [RRSAgent]
I have made the request to generate
antoine
16:07:01 [Zakim]
-kefo
16:07:06 [michaelp]
michaelp has left #lld
16:07:44 [TomB]
16:07:54 [antoine]
rrsagent, please draft minutes
16:07:54 [RRSAgent]
I have made the request to generate
antoine
16:20:08 [Zakim]
-emma
16:20:13 [Zakim]
-antoine
16:20:17 [Zakim]
-TomB
16:20:19 [Zakim]
INC_LLDXG()10:00AM has ended
16:20:20 [Zakim]
Attendees were antoine, TomB, +33.1.53.79.aaaa, +1.614.764.aabb, emma, jeff__, GordonD, kefo, michaelp, +1.330.289.aacc, marcia, +1.423.463.aadd, rsinger, ww, kcoyle, pmurray,
16:20:22 [Zakim]
... +1.614.372.aaee, LarsG
16:20:29 [pmurray]
pmurray has left #lld
18:14:14 [Zakim]
Zakim has left #lld | http://www.w3.org/2011/03/10-lld-irc | CC-MAIN-2015-32 | en | refinedweb |
Cloud Security
Crypto Services and Data Security in Microsoft Azure
Jonathan Wiggs
Many early adopters of Microsoft Azure still have a lot of questions about platform security and its support of cryptography. My hope here is to introduce some of the basic concepts of cryptography and related security within Azure. The details of this topic could fill whole books, so I am only intending to demonstrate and review some of the cryptography services and providers in Azure. There are also some security implications for any transition to Azure.
As with any new platform or service delivery method, you’ll be faced with new challenges. You’ll also be reminded that some of the classic problems still exist and even that some of the same solutions you’ve used in the past will still work very well. Any application engineer or designer should think about this topic as it relates to the kind of data you may be storing as well as what you need to persist. Combine this into a methodical approach and you and your customers will be well-served.
So why would I think this information is needed within the developer community? Over the last several months I’ve seen an increasing number of posts on the community sites regarding security in general with Azure. Microsoft has suggested encryption as part of securing application-layer data with Azure projects. However, proper understanding of both encryption and the .NET security model will be needed by product designers and developers building on Azure.
One thing I noticed was an increasing percentage of posts specific to crypto services and key storage. This was especially true with regards to Azure Storage services. It got my own curiosity going, and I discovered it was a worthy topic to discuss in some depth.
During the course of this article, I’ll be making heavy use of Cryptographic Service Providers (CSPs), which are implementations of cryptographic standards, algorithms and functions presented in a system program interface. For the purposes of this article I’ll be using the symmetric encryption algorithm provided by the Rijndael cryptography class.
Crypto Basics
The Microsoft Azure SDK extends the core .NET libraries to allow the developer to integrate and make use of the services provided by Azure. Access to the CSPs has not been restricted within. I’ll discuss key and secret data persistence a little later in this article.
You also have access to the full array of cryptographic hash functionality in Azure, such as MD5 and SHA. These are vital to enhance the security of any system for things such as detecting duplicate data, hash table indexes, message signatures and password verification.
A consistent recommendation is to never create your own or use a proprietary encryption algorithm. The algorithms provided in the .NET CSPs are proven, tested and have many years of exposure to back them up. Using XOR to create your own cipher process is not the same, and does not provide the same level of data security.
A second recommendation is to use the RNGCryptoServiceProvider class to generate random numbers. This ensures random numbers generated by your application will always have a very high level of entropy, making it hard to guess at the patterns.
The code below implements a single static member that returns a 32-bit int value that is random and meets the requirements to be cryptographically secure. This is made possible by using the byte generator in the RNGCryptoServiceProvider found in the Cryptography namespace:
Figure 1 shows a simple example of using the CSPs within Azure. Three public members are exposed for use within any Azure application. The first accepts a binary key and initialization vector (IV), as well as a binary buffer of unencrypted data and returns its encrypted equivalent. The second member does the reverse by decrypting the data. The third member returns the calculated hash value for that data. Notice here that I’m using the Rijndael CSP for managed access to a provider. I’m also storing data and keys in binary buffers and writing over them as soon as I’m finished with them. I’ll touch on this topic later when I discuss immutability.
Figure 1 Simple Encryption
public static byte[] SampleEnc.CreateEncryptor(), CryptoStreamMode.Write); EncryptionStream.Write(dataBuffer, 0, dataBuffer.Length); EncryptionStream.Close(); byte[] ReturnBuffer = InMemory.ToArray(); return ReturnBuffer; }
This is the simplest example of encrypting data and returning the encrypted results as a byte array. This is not code that should be used in a secure environment without all the proper security analysis, only an example.
The example in Figure 2 has an almost identical structure to the one in Figure 1. In this case, I’m decrypting data based on the same key and IV, only with an encrypted byte buffer as a parameter. The only real difference here is that when I create the encryption stream, I specify that I’m creating a symmetric decryptor and not an encryptor as I did previously.
Figure 2 Simple Decryption
public static byte[] SampleDec.CreateDecryptor(), CryptoStreamMode.Write); EncryptionStream.Write(dataBuffer, 0, dataBuffer.Length); EncryptionStream.Close(); byte[] ReturnBuffer = InMemory.ToArray(); return ReturnBuffer; }
Key Storage and Persistence
As with any encryption strategy at the application or enterprise layer, the encryption and decryption infrastructure is less than half the battle. The real problem comes with key storage and key persistence. The data security provided by encrypting data is only as secure as the keys used, and this problem is much more difficult than people may think at first. Systems I’ve reviewed have stored crypto keys everywhere, from directly in source code, to text files named something clever, to flat files stored in hard-to-find directories.
An important question of key persistence comes about when considering where to store and keep keys in a cloud environment. Some people have expressed concern that by persisting keys in the cloud you’re exposing yourself to a security threat from the cloud itself. That is, if someone can get physical access to your data, data stored on disk may not be encrypted by default (as is the case with Azure). Considering that SQL Azure does not yet support encryption either, this becomes a security decision to be considered in the planning and design of your solution. As with any security implementation, the risks must be measured, weighed and mitigated.
But that doesn’t mean cloud platforms in general—and Azure in particular—are inherently not secure. What other options may be available to you?
One thing to note right away is that no application should ever use any of the keys provided by Azure as keys to encrypt data. An example would be the keys provided by Azure for the storage service. These keys are configured to allow for easy rotation for security purposes or if they are compromised for any reason. In other words, they may not be there in the future, and may be too widely distributed.
Storing your own key library within the. This is fairly straightforward to implement. For example, say you wanted to implement your own key library as a simple text file to persist some secret information. This would be best stored as data in the blob service API as opposed to either the queue or table storage service. The blob area of the storage service is the best place for data such as binary audio and images or even text files. The queue portion of the service is focused on secure messaging for small data objects that do not persist for long periods of time. The table storage system is great for structured data and information that needs to be persisted and accessed in specific parts, identical to relational data in a database.
You start by persisting a key in a CSP key container. This is a great option for storing a public key that is difficult to retrieve without physical access to the server. With Azure, where the location of applications and data is abstracted, this would make even a public key stored in this manner extremely difficult to find and retrieve. The creation of a key storage container is very simple; here is an example using the RSA provider that creates our key. If the key container already exists, its key is loaded into the provider automatically:
There are also other options you can consider based on your needs. For example, you can use specific flags to secure the key to the user that created the container. This can be done with the use of CspParameters flags member:
Now create a request to the blob API using your Azure storage key. The request itself requires both a signature string as well as a proper request header. The proper header format is:
In this case, I want to maximize the security of my persisted secret data, so I’ll use the SharedKey authorization method. The signature portion of the header is a hash-based authentication code that is generated by the SHA256 algorithm and your storage key against the data in the signature. This hash is then encoded into a base64 string. A sample signature might look like this:
As described earlier, I would then generate the base64 encoded hash and use that in the header as the signature. This key file then could only be accessed by those who have an application that runs in your application space in the Azure cloud with access to your storage keys. So with key persistence, you can either manage the keys outside the Azure framework or inside the cloud itself.
Key and Security Threats
One item worth covering at least briefly is key security. This is a slightly different topic than how you persist and store keys. Keys themselves are essentially strings of characters that have a very high level of entropy, meaning an extremely high level of randomness. In fact, this can lead to a common attack process to find keys within a system. For instance, if you take a dump of memory or an area of data on a hard disk, the areas of extremely high entropy are great places to start mining for keys.
Apart from choosing good security practices based on the needs of your application and securing your data, how else can you protect yourself?.
Finally, invest time in diagramming the flow of your data, both secure and unsecure. Take a look at where your data goes and how, where you store secrets, and especially where your data crosses boundaries such as public and private networks. This will give you a good idea of where your data is exposed, and allow you to target those risks with plans for mitigating them in a straightforward manner.
A related question I’ve been asked is whether Azure supports SSL. The short answer to this is yes! Azure would not be a very capable cloud platform for Web-based services and applications without support for SSL.
Encryption with SQL Azure
The release of SQL Server 2008 introduced a new feature: transparent data encryption (TDE). For the first time, SQL Server can encrypt its data fully with very little effort needed beyond what was required for the limited encryption available in SQL Server 2005. However, the initial version of SQL Azure storage does not yet support database-level encryption, though it’s a feature being considering for a future version. It should be noted that SQL Azure is only available via port 1433 and only via TCP connections; it currently cannot be exposed on other ports.
Even though this feature is not yet integrated into Azure, there are several security features of SQL Azure that the developer or designer should keep in mind. First of all, SQL Azure supports the tabular data stream (TDS). This means you can for the most part connect and interact with the database just like you’ve always done. Taking advantage of ADO.NET encryption and trusted server certificates is definitely worth considering, especially when accessing your SQL Azure database from outside the cloud.
The connection properties Encrypt=True and TrustServerCertificate = False, in the proper combination, will ensure data transmission is secure and can help prevent man-in-the-middle attacks. This is also a requirement for connecting to SQL Azure—you cannot connect to SQL Azure unless connection-level encryption has been turned on.
The second security feature of SQL Azure you should familiarize yourself with is the SQL Azure firewall. This tool will be very familiar to those who have used local software firewalls or even SQL Server security surface-area toolsets..
As with any implementation of SQL Server, user account management is another aspect that must be tightly controlled. The firewall within SQL Azure is indeed a great tool, but it should not be relied on by itself. User accounts with strong passwords and configured with specific rights should be used as well to complement your data security model.
These new tools go a long way toward making SQL Azure a very tightly secured managed platform for cloud-based applications. If you’re trying this service out for the first time, remember that before you can connect, you must initially configure the SQL Azure firewall. This must first be done through the SQL Azure Web portal, but can be done later directly in the master database as described earlier.
Immutability and In-Memory Resources
Immuta-what? Immutability in object-oriented programming simply means that the object’s state cannot be modified after its initial creation. A concrete example in the Microsoft .NET Framework is the string class. When the value of a string is changed in code, the original string in memory is simply abandoned and a new string object is created to store the new value.
Why is this important from a security perspective? Well, that string may stay in memory for as long as the server is online without a reboot. You really have no way of knowing with certainty how long a string will stay in memory. This is important when considering how to store information in code such as cryptographic keys or copies of encrypted and decrypted data. By leaving a trail of that data behind you in memory, you leave behind information that exposes your secrets to the clever data thief.
Because of this vulnerability, it is always recommended that such data be stored in buffers such as byte arrays. That way, as soon as you’re done with the information, you can overwrite the buffer with zeroes or any other data that ensures the data is no longer in that memory.
Because Azure is a cloud environment I’ve been asked if this is a still a concern, and it’s a good question. True, in the Azure system individual applications are isolated from each other. This makes exposing data in memory much less of an issue in general. It would be very difficult to associate applications and memory space in the cloud. However, I still recommend the cautious approach and cleaning up after yourself. You may not always run this piece of code in the cloud, and other vulnerabilities may expose themselves in the future. While less of a concern, keep this habit, and persist this approach.
In Figure 3 I’ve modified the previous example that generated random integers. Here I added a bit of error-handling to ensure that I have a finally block that always runs, no matter what. Within that block I am doing a very simple iteration through the values in the byte array, overwriting each position with a zero. This overwrites that data in memory because byte arrays are mutable. I know that this number is no longer in memory owned by the execution of this member. This can be done to any byte array used as a data buffer for items such as keys, initialization vectors, and encrypted or decrypted data.
Figure 3 Clearing Data from Memory
public static int GenerateRandomNumber() { byte[] GeneratedBytes = null; try { GeneratedBytes = new byte[4]; RNGCryptoServiceProvider CSP = new RNGCryptoServiceProvider(); CSP.GetBytes(GeneratedBytes); return BitConverter.ToInt32(GeneratedBytes, 0); } finally { for (int x = 0; x < GeneratedBytes.Length; x++) { GeneratedBytes[x] = 0; } } }
Message Queues
Azure queues provide a similar set of functionality to the Microsoft Message Queuing (MSMQ) services that are common to enterprise Windows applications. The message queue service within Azure stores text-based messages no larger than 8 KB in a first-in, first-out (FIFO) manner. This allows services and applications running on different servers—or in this case within the cloud—to interact and send actionable messages to each other in a secure and distributed manner.
There are five basic functions that allow you to push a message to the queue, peek a message, pull a message and so on. The question that has come up most often is, how secure are these messages?
Many of the features currently supported by MSMQ are not yet supported within the Azure messaging APIs. However, there are similarities. As with the blob data service, the messaging services makes use of the same REST get and put interfaces. Writing and reading messages can be done either in code or with a URI and Web request calls that can be encrypted via SSL for requests over unsecured networks. This means transmission of requests is encrypted.
Also, as with the other storage services within Azure, any access to a message queue must make use of the same storage service key. Only applications with access to the key can view or add messages to these queues. This makes any additional encryption of these messages overkill unless the body of these messages is going to be leaving the secured network or secured application space.
Wrapping It All Up
In today’s drive toward service-oriented architecture and solutions, few can consider doing business without cloud applications. The isolation of data and services in a multi-tenant environment such as Azure is one of the major concerns of anyone who has an eye toward using private data.
As with any new platform, security and cryptography features will continue to evolve in Microsoft Azure. Microsoft has taken great pains to not only provide a secure, isolated environment, but also to expose what it has done to allow for public certification of these measures. This should give engineers confidence that Microsoft wants to be a closer partner on security and keeping systems and applications locked down. I’ve continued to emphasize the need of constant evaluation of security and cryptographic requirements through this article. That is essential to ensuring these tools can be used effectively to make your cloud system secure and to protect your data.
Jonathan Wiggs is currently a principal engineer and development manager at Nuance Communications Inc. Read his blog at jonwiggs.com or contact Wiggs directly at Jon_Wiggs@yahoo.com.
MSDN Magazine Blog
More MSDN Magazine Blog entries >
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/magazine/ee291586.aspx | CC-MAIN-2015-32 | en | refinedweb |
Mapping Corporate Twitter Account Networks Using Twitter Contributions/Contributees API Calls
Savvy users of social networks are probably well-versed in the ideas that corporate Twitter accounts are often “staffed” by several individuals (often identified by the ^AB convention at the end of a tweet, where AB are the initials of the person wearing the that account hat (^)); they may also know that social media accounts for smaller companies may actually be operated by a PR company or “social media guru” who churns out tweets their behalf via Twitter accounts operated in the company’s name and in support of it’s online marketing activity.
Rooting around the Twitter API looking for something else, I spotted a GET users/contributees API cal, along with a complementary GET users/contributors call that return “an array of users (i.e. Twitter accounts) that the specified user can contribute to”, and the accounts that can contribute to a particular Twitter account respectively.
I didn’t know this functionality existed, so I put out a fishing tweet to see if anyone knew of any accounts running this feature other than the twitterapi account used by way of example in the API documentation. A response from Martin Hawksey (on whom I’m increasingly reliant for helping me keep up and get my head the daily novelties that the web throws up!), suggested it was a feature that has been quietly rolling out to premium users: Twitter Starts Rolling Out Contributors Feature, Salesforce Activated. Via his reading of that post (I think), Martin suggested that a Bing(;-) search for site:twitter.com “via web by” would turn up a few likely candidates, and so it did…
So why’s this interesting? Because given the ID of an account that a company users for corporate tweets, or the ID of a user who also contributes to a corporate account via their own account, we might be able to map out something of the corporate comms network for an organisation operating multiple accounts (maybe a company, but maybe also a government department or local council ,or lobbiest group), or the client list of “social media guru” operating various accounts for different SMEs.
Anyway, here’s quick script for exploring the TWitter contributors/contributees API. The output is a graphml file we can visualise in Gephi.
And here are a couple of views over what it comes up with. Firstly, a map bootstrapped from the @twitterapi account:
And here’s one I built out from HuffingtonPost:
So what do we learn from this? Firstly it’s yet another example of how networks get everywhere. Secondly, it raises the question (for me) of whether there are any cribs in other multi-contributor social network apps (maybe in tweet metadata) that allow us to identify originating authors/users and hence find a way into mapping their contribution networks.
As well as building out from an account name to which users contribute, we can bootstrap a map from a user who is known to contribute to one or more accounts (code not included in Github gist atm).
So for example, here’s a map built out from user @VeeVee:
I guess one of the next questions from a tool building point of view is: is there a more reliable way of getting cribs into possible contributor/contributee networks? Another is: are any other multi-contributor services (on Twitter or other networks, such as Google+) similarly mappable?
PS Just noticed this: Google to drop Google Social API. I also read on a Google blog that the Needlebase screenscraping tool Google acquired as part of the ITA acquisition will be shut down later this year…
Big red circles around things always catch my attention which is why the TechCrunch article stuck in my mind (which I only went looking for to try and work out what was going on). On more trawl for more info it was interesting to see the Salesforce twitter account had gone back to ^AB type signatures to use their ‘Social Media Monitoring and Engagement’ platform radian6 for tweeting. Given that the search term is ‘via web by’ suggests that almost 2 years on Twitter hasn’t got around to a post as a contributor part of their API (imagine this has left some businesses scratching their head).
The HuffingtonPost web is interesting. Given that it appears updates are via the web why bother with a network of nameless accounts.
[Didn’t know Social API was closing – balance is restored ;)]
Martin
Hi there,
Goldsmiths CAST student here. I get the following error when I run your script:
Traceback (most recent call last):
File “twContribs.py”, line 15, in
fpath=’/’.join([‘reports’,’contributors’,’_’.join(args.contributeto)])
TypeError
—
Am I doing something wrong here?
Thanks,
Sam
@sam how are you running the script… it’s all a bit clunky (didn’t I warn you?!;-)
An example way of calling it is:
python twContribs.py -contributeto twitterapi -depth 5
PS I also updated the gist just now to the copy I currently have running locally, just in case..
@tony Thanks for this, the updated script seemed to work, however, now I only have one Node when I open the graph.graphml into Gephi. Should I change something in the script?
Thanks
Sam
@tony. Not sure what’s happening, as it looks like I’m putting in the required data in lines 6-9:
parser = argparse.ArgumentParser(description=’Mine Twitter account contributions’)
parser.add_argument(‘-contributeto’,nargs=’*’, help=”MichelleObama,whitehouse,datastore,castlondon,GdnDevelopment”)
parser.add_argument(‘-contributeby’,nargs=’*’, help=”danmcquillan,zia505,datastore,castlondon,AlexGraul”)
parser.add_argument(‘-depth’,default=3,type=int,metavar=’N’,help=’5′)
—
But I get this at the ned of my python output and no nodes:
fetching fresh copy of fetched url:
oops
{‘userlist': [], ‘graph': , ‘contributors': {‘twitterapi': []}, ‘accountlist': [‘twitterapi’], ‘contributees': {}}
contributors {‘twitterapi': []}
contributees {}
accountlist [‘twitterapi’]
userlist []
@sam is simplejson and urllib2 loading? From a console, run Python (just type: python); then:
import simplejson
import urllib2
Or start putting print statements everywhere to try to track what’s going on;-)
@sam: also note that: ‘help=”A space separated list of account names (without the @) for whom you want to find the contributors.”)’ is a statement that appears when you you call the help file relating to the script from the command line ( python twContribs.py -h ), not a “PUT SPACE SEPARATED VALUES HERE” instruction to the user.
The ‘parser’ commands in the script set up a parser that python uses when you execute a command from the command line. So from the command line, if you type something like:
python twContribs.py -contributeto twitterapi
the script knows about -contributeto
whereas it doesn’t know about:
python -someRandomCommandLineArgument somerandomvalue
(If you look up the python documentation – use your favourite search engine to search for: python argparse – it will explain what argparse is about.)
Also note that the script accepts space separated multpiple values [help=”A space separated list of account names (without the @) for whom you want to find the contributors.”] so you can run things like: python twContribs.py -contributeto twitterapi starbucks
If you try comma separated vals, it probably won’t work…
It’s also worth bearing in mind that most accounts aren’t associated with contributions to/by other accounts…
@sam oh yes, one final thing… the script uses unauthenticated twitter api calls, so it maxes out quite quickly (150 calls an hour). I should probably print an error message when this happens, but I don’t (feel free to add it into the script). A quick way to check (though it uses an API call) is just to call the API from your browser eg paste:
into your browser location bar. If you get a message along the lines of “Error – too many calls/API rate limit exceeded, back off for an hour..” then you’ve maxed out for a bit… You can get more calls per hour using OAuth/authenticated calls to the API, but that’s more code, more things to go wrong, etc etc.
@tony Ah, understood. Thanks for your input – I’ll give this a try – appreciated.
@sam any joy?
@tony Almost got it working – just a few glitches which I am ironing out though printing – will give it another try first thing in the morning – my eyes are going a bit matrix at the mo’. Thanks for your help!
@sam what sort of glitches??? The script is tiny – would be handy for me to know how many different things can go wrong with it…. ;-)
@tony Nothing wrong with your script – just one of my libraries not properly installed. All sorted now. Thanks for your help. :-)
@sam Ah, thanks… I maybe need to write a diagnostic script that tests for libraries I commonly use that folk can run as a test script; would that be useful?
@tony That would be extremely useful. Thanks. | http://blog.ouseful.info/2012/01/23/mapping-corporate-twitter-account-networks-using-the-twitter-multiple-author-contributionscontributees-api/ | CC-MAIN-2015-32 | en | refinedweb |
I confess to a deep fascination with the seemingly mundane topic of logging. Software crashes, shopping cart abandonment, and security breaches are among the many situations in which you’ll find yourself poring over logs trying to figure out what went wrong. Like many a developer and network administrator, I honed my Perl programming chops doing the kinds of data reduction and analysis for which that language is ideally suited.
Yet no amount of Perl magic can save the day if your logs capture too little or wrongly focused data. And that’s a bit of a catch-22. To do good sleuthing you’ve got to have deployed the right kinds and levels of instrumentation. But as the data begins to tell its tale, it suggests the need for more or different instrumentation. Because the feedback loop is often attenuated, it’s a real challenge to strike the right balance.
Why not just log everything? Even today’s capacious disks fill up quickly when you turn your loggers’ dials to 10. So adaptive logging is becoming a hot research topic, especially in the field of security. The idea is to let your loggers idle until something suspicious happens, then crank them up. Of course, defining what’s suspicious is the essence of the challenge. Network forensics experts say that it takes, on average, 40 hours of analysis to unravel a half-hour of attack activity — and that’s after the fact. Will autonomic systems someday be able to generate and test hypotheses in real time, while adjusting instrumentation on the fly? I hope so, but I’ll believe it when I see it.
In the field of Web analytics, it’s been fairly straightforward to correlate user interaction with the clickstream recorded in a Web server’s log, but the changing architecture of Web software now threatens old assumptions. When I gave a talk describing how rich Internet applications can converse with Web services, a Web developer in the audience asked, “Where are the logs?” That’s a good question. Local interaction with a Java or .Net or Flash application won’t automatically show up in the clickstream, nor will SOAP calls issued from the rich client. You have to make special provisions to capture these events. That’s eminently doable, but I worry that if logging isn’t always on by default, vital information will often go unrecorded. On the other hand, clickstreams don’t necessarily correlate well to behaviors you’d like to understand. The XML message patterns of a services-based application may enable higher-level and more meaningful analysis.
It’s fun to speculate, but meanwhile our systems keep accumulating logs. How can we deal with them more effectively? Over the years I’ve developed some simple strategies. In the security realm, for example, I like to watch the size of my logs day by day. That’s an easily obtained baseline; deviation from it tells me to look under the hood.
When you want to do Web analytics, here’s a tip: Intelligent namespace design can dramatically simplify the chore. If you consistently embed categories, dates, or other selectors into your URLs, it’s easy to view your logs along those dimensions. I steer clear of content management systems and log analysis tools that don’t offer such flexibility.
Logs can flood us with information, or they can tell us compelling stories. We can influence the outcome by artful and iterative refinement of the data we collect. | http://www.infoworld.com/d/developer-world/artful-logger-021 | crawl-002 | en | refinedweb |
[ Usenet FAQs | Search | Web FAQs | Documents | RFC Index ]
Search the FAQ Archives
Search the FAQ Archives
-
Part1 - Part2 - Part3 - Part4 - Part5 - Part6 - Part7 - Part8 - Part9 - MultiPage
INN FAQ Part 2/9: Specific notes for specific operating systems
From: hwr@pilhuhn.de (Heiko W.Rupp) Newsgroups: news.software.nntp, news.software.b Subject: INN FAQ Part 2/9: Specific notes for specific operating systems Date: 9 Dec 1997 03:25:31 +0100 Message-ID: <faq.p2_881634325@pilhuhn.de> Summary: This article is part 2 of a multi-part FAQ: Part 2: Advice specific to certain operating systems. Posted-By: post_faq 2.10 Archive-name: usenet/software/inn-faq/part2 Last Changed: $Date: 1997/09/23 01:25:52 $ $Revision: 2.34 $ Part 2 2/9 ===================================================================== TABLE OF CONTENTS FOR PART 2/9 ===================================================================== SPECIFIC NOTES FOR SPECIFIC OPERATING SYSTEMS: 2.1 BASH tips 2.2 GNUS tips 2.3 AIX tips 2.4 SunOS 4.1.1 tips 2.5 Ultrix tips 2.6 HP-UX tips 2.7 UnixWare tips 2.8 Linux tips 2.9 A/UX 3.0 (Macintosh) tips 2.10 Alpha OSF tips 2.11 SGI IRIX 5.x tips 2.12 Systems where only root can have "cron" jobs. 2.13 System V based Unixes (SVR4, Solaris 2.x, SCO ODT 3.0, AIX, A/UX, DELL, ...) 2.14 Solaris 2.x special needs 2.15 Slackware Tips 2.16 BSDi 2.0 / FreeBSD / NetBSD 2.17 3Com Router users 2.18 NOV problems on a Pyramid 2.19 Warnings to people that must set HAVE_UNIX_DOMAIN to DONT 2.20 INN for SNI RM400 2.21 INN on NeXT-/OpenStep Note: See also Appendix B (Part 9 of the FAQ) ====================================================================== SPECIFIC NOTES FOR SPECIFIC OPERATING SYSTEMS ======================================================================
Subject: (2.1) BASH tips If you are using a Unix who's /bin/sh is a hardlink to bash, you'll find problems using nntpsend. nntpsend uses a variable named PPID, which is a read-only variable in BASH. You'll get errors that look like this: sh: PPID: read-only variable You can fix it using the the following patch: *** nntpsend~ Thu Aug 12 03:36:16 1993 --- nntpsend Sat Oct 23 15:54:11 1993 *************** *** 1,4 **** ! #! /bin/sh ## $Revision: 2.34 $ ## Send news via NNTP by running several innxmit processes in the background. ## Usage: --- 1,4 ---- ! #!/usr/local/bin/bash ## $Revision: 2.34 $ ## Send news via NNTP by running several innxmit processes in the background. ## Usage: *************** *** 130,140 **** chmod 0660 ${LOG} exec >>${LOG} 2>&1 fi ! PPID=$$ ! echo "${PROGNAME}: [${PPID}] start" ## Set up environment. ! export BATCH PROGNAME PPID INNFLAGS ## Loop over all sites. cat ${INPUT} | while read SITE HOST MAXSIZE FLAGS; do --- 130,140 ---- chmod 0660 ${LOG} exec >>${LOG} 2>&1 fi ! CPID=$$ ! echo "${PROGNAME}: [${CPID}] start" ## Set up environment. ! export BATCH PROGNAME CPID INNFLAGS ## Loop over all sites. cat ${INPUT} | while read SITE HOST MAXSIZE FLAGS; do *************** *** 240,246 **** fi ## Start sending this site in the background. ! export SITE HOST LOCKS BATCHFILE PROGNAME PPID SIZE TMPDIR sh -c ' BATCHFILE=${HOST}.nntp LOCK=${LOCKS}/LOCK.${HOST} --- 240,246 ---- fi ## Start sending this site in the background. ! export SITE HOST LOCKS BATCHFILE PROGNAME CPID SIZE TMPDIR sh -c ' BATCHFILE=${HOST}.nntp LOCK=${LOCKS}/LOCK.${HOST} *************** *** 247,253 **** trap "rm -f ${LOCK} ; exit 1" 1 2 3 15 shlock -p $$ -f ${LOCK} || { WHY="`cat ${LOCK}`" ! echo "${PROGNAME}: [${PPID}:$$] ${HOST} locked ${WHY} `date`" exit } if [ -f ${SITE}.work ] ; then --- 247,253 ---- trap "rm -f ${LOCK} ; exit 1" 1 2 3 15 shlock -p $$ -f ${LOCK} || { WHY="`cat ${LOCK}`" ! echo "${PROGNAME}: [${CPID}:$$] ${HOST} locked ${WHY} `date`" exit } if [ -f ${SITE}.work ] ; then *************** *** 254,259 **** --- 254,260 ---- cat ${SITE}.work >>${BATCHFILE} rm -f ${SITE}.work fi + if [ -s ${SITE} ] ; then mv ${SITE} ${SITE}.work if ctlinnd -s -t30 flush ${SITE} ; then cat ${SITE}.work >>${BATCHFILE} *************** *** 260,273 **** rm -f ${SITE}.work test -n "${SIZE}" && shrinkfile -s${SIZE} -v ${BATCHFILE} if [ -s ${BATCHFILE} ] ; then ! echo "${PROGNAME}: [${PPID}:$$] begin ${HOST} `date`" ! echo "${PROGNAME}: [${PPID}:$$] innxmit ${INNFLAGS} ${HOST} ..." eval innxmit ${INNFLAGS} ${HOST} ${BATCH}/${BATCHFILE} ! echo "${PROGNAME}: [${PPID}:$$] end ${HOST} `date`" else rm -f ${BATCHFILE} fi fi rm -f ${LOCK} ' & sleep 5 --- 261,275 ---- rm -f ${SITE}.work test -n "${SIZE}" && shrinkfile -s${SIZE} -v ${BATCHFILE} if [ -s ${BATCHFILE} ] ; then ! echo "${PROGNAME}: [${CPID}:$$] begin ${HOST} `date`" ! echo "${PROGNAME}: [${CPID}:$$] innxmit ${INNFLAGS} ${HOST} ..." eval innxmit ${INNFLAGS} ${HOST} ${BATCH}/${BATCHFILE} ! echo "${PROGNAME}: [${CPID}:$$] end ${HOST} `date`" else rm -f ${BATCHFILE} fi fi + fi rm -f ${LOCK} ' & sleep 5 *************** *** 275,278 **** wait rm -f ${INPUT} ! echo "${PROGNAME}: [${PPID}] stop" --- 277,280 ---- wait rm -f ${INPUT} ! echo "${PROGNAME}: [${CPID}] stop"
Subject: (2.2) GNUS tips In article <3g82ll$mr4@tid.tid.es> Emilio Losantos <emilio@tid.es> writes: > I have to use GNUS 4.1 to read news from a nntp server running INN 1.4, but > whenever I try to select a group I receive the message: > "GROUP" not implemented; try "help" > Could anybody tell me how to fix this problem? jbryans@csulb.edu (Jack Bryans) replies: Patch your nntp.el something like this: *** 72,77 **** --- 72,79 ---- (set-process-sentinel nntp/connection 'nntp/sentinel) (process-kill-without-query nntp/connection) (let ( (code (nntp/response)) ) + (nntp/command "mode reader") + (nntp/response) (or (eq code 200) (eq code 201)))) (defun nntp-server-opened () Note that your line numbers may vary. There's a lot of nntp.el's out there.
Subject: (2.3) AIX tips Q: Is there a config.data for AIX 4.1 ? A: In <> you will find one for AIX 4.1.4 and INN1.5.1. If you want to use this with older INN versions, then you have to remove some lines from it. Note that it might be that in the sample CLX_STYLE is set to IOCTL. You might change this to FCNTL as described below (#2.13) if you get many overchan processes. Q: In config.data, should ACT_STYLE be set to READ or MMAP? A: Gee, some say MMAP works, some say it doesn't. I recommend you use READ. After you've been running for a month, try MMAP for a day and see what happens. Kurt Jaeger <Kurt.Jaeger@RUS.Uni-Stuttgart.DE> adds: On 3.2.5, MMAP works if one makes some patch to the innd so that it allocates one byte more than filelength(active) or filelength(history [or whatever is mapped by innd]). Reason: If filelength(active/whatever file) on AIX is a multiple of page size (4096 bytes), searching for a trailing NUL byte in a MMAPed file will kill the process with SEGV or the AIX equivalent. Q: What compiler should I use? A: Most people use what's listed in Install.ms, though we have one report of a AIX 3.2.5 user that found bsdcc worked better. Q: When I run news.daily, there's always a few lines of error messages at the end of the output: | compress: bad file number A: AIX /usr/bin/compress has a bug when compressing files with zero length. Then it spits out this error. Solution: Ignore it or use a different compress program and change config.data accordingly. (from Kurt Jaeger <pi@rus.uni-stuttgart.de>) Q: innwatch doesn't work well from /etc/inittab, does it? A: Nope. Instead, you can create a "subsystem" with this command: mkssys -s innwatch -p /usr/local/news/bin/innwatch \ -u `id -u news` -G news -S -n 15 -f 9 Note that your path to innwatch may differ, depending on where you decided to install the inn components. You also need to enter the command as one long line. This will create a subsystem named "innwatch" belonging to an SRC group named "news". The "-S" means that it uses signals for SRC to tell it when to stop and the "-n" is the SIGTERM signal, for normal shutdown, and the "-f" is the SIGKILL signal, which is sent if the process does not stop within 20 seconds. Then, modify rc.news to issue the command startsrc -s innwatch to get innwatch going. That's it! Shane Castle <swcxt@boco.co.gov, swcxt@csn.org> Q: When I compile I get something like: 0706-317 ERROR: Unresolved or undefined symbols detected: Symbols in error (followed by references) are dumped to the load map. The -bloadmap:<filename> option will create a load map. .dbzwrit cd frontends ; make all ; cd .. Target all is up to date. A: That means you don't have a program called "patch" installed on your machine. Refer to "Subject: ld.so: Undefined symbol: _dbzwritethrough" Q: What can I change in innwatch.ctl to make it work right? A: The "df" command in AIX has a funny output that requires you to modify innwatch.ctl. The FTP site has an install.ctl that uses "df -i" (some AIX versions) and another one that uses "df -v" (recommended by someone with AIX 3.2.5). Q: Can I use a compressed filesystem? A: (From Kurt Jaeger): On AIX 4.1.x, use compressed filesystems with 512 bytes per fragment and 2048 bytes per inode. This is the best space optimazation I could find up to now. News is I/O bound, so doing some more compression to save on head seeks and reads will better balance your system. I currently have a 100/60% yield: If the disk would be 100% full, 60% of the inodes would be used.
Subject: (2.4) SunOS 4.1.1 tips SunOS 4.1.1 (but not 4.1.2 or 4.1.3) broke the write system call but a patch is available. Any write could fail "half way", it is just more likely to happen when writing large files and in-core DBZ writes the history file out in one chunk. The "Known Problems" section of the installation manual says to install Patch 100293-01, but that has been replaced by 100622-01.
Subject: (2.5) Ultrix tips Tip #1: Ultrix has a "mmap()" function, but it doesn't do the same thing as the SunOS/BSD mmap() function. Therefore, do not configure INN to use mmap() on a Ultrix system. INN wants to find a mmap() function that is like the one on SunOS/BSD systems. Tip #2: The sendsys script breaks Ultrix 'nawk'. You can make a 1-line change or you can switch to 'awk' or "gawk". Original line: ${AWK} "/^$1"'[/:\\]/,/[^\\]$/' ${NEWSFEEDS} >${TEMP} Modified line: ${AWK} "/^$1"'[\/:\\]/,/[^\\]$/' ${NEWSFEEDS} >${TEMP} The original line will work with awk, gawk, but not nawk. The modified line will work with awk, gawk, or nawk. If you have gawk running on your machine use it. Otherwise, switch to awk. Tip #3: The syslog on Ultrix sucks rotten eggs and Digital refuses to fix it. (source: everyone that uses Ultrix and has ever used other systems) Luckily, you can replace it with the routine that comes with INN. However, some people have had better luck installing the syslog that can be found on "gatekeeper.dec.com:/pub/DEC/jtkohl-syslog-complete.tar.Z". It still works with old clients but does new-style syslogging, too. Works great for me so far. (this information from: nelson@reed.edu (Nelson Minar)). The syslog that is shipped with INN works pretty well but there have been some claims that some old clients don't like it.
Subject: (2.6) HP-UX tips Q. My logs keep telling me there is no space for articles A. Edit innwatch.ctl to use "bdf" instead of "df". Q. I am running inn on an HP machine. INN won't start up automatically. I can start it manually. There is no problem with news or INN once it is started. A. Try adding a "sleep 10" to the bottom of /etc/rc.news, or in /etc/rc, right after /etc/rc.news is invoked. On some machines, including HP, the shell started by "#!/bin/sh" when /etc/rc is executed will exit before innd has disassociated itself from that shell. This causes innd to exit, sometimes without printing an error message. (source: pjoslin@mbvlab.wpafb.af.mil (Paul Joslin )) This problem goes away if you set HAVE_SETSID to "DO". Something to do with Posix Session Leader concepts. Ick. (source: Steve Howie <showie@uoguelph.ca>). You can also do something like: echo /usr/lib/etc/rc.news | at now + 2 minutes or else nohup su news -c /usr/local/etc/rc.news& HP-UX 8.x and 9.x users might find a problem with getting innwatch to start up. People have found that having "at" start it seems to work more reliably than other methods: $. Q: INN-1.4sec running on an HP9000 s700 with HP-UX 9.01 leaks memory like crazy. The innd process grows and grows, then stops with: "ME cant remalloc 8192 bytes Not enough space" A: The cause turns out to be a memory leak in the standard C library (both /lib/libc.a and /lib/libc.sl). Installed patch PHCO_5056 (or the latest libc patch).
Subject: (2.7) UnixWare tips UnixWare 1.1.2 works with domain sockets. Install ptf149 "unix domain sockets" and ptf678 "fix for sockmod's incorrect handling of disconnect indication" Otherwise, configure like any SVR4 system.
Subject: (2.8) Linux tips Get inn-1.4-linux-0.1.tar from It contains instructions for installing INN on a Linux system and a working config.data file. (from ghio@myriad.pc.cc.cmu.edu) If you don't follow the directions in inn-1.4-linux-0.1.tar, here are some of the problems you might have: > nntpsend.log says the following. > nntpsend: [214:222] innxmit -a -t300 -T1800 > travelers.mail.cornell.edu ... > Ignoring line "cornell/test/13 805 ..." > sh: PPID read-only variable Tomasz Surmacz <tsurmacz@ict.pwr.wroc.pl> writes: If you are using INN under Linux or have your /bin/sh a symlink to /bin/bash the above problem appears (in nntpsend precisely speaking, not innxmit) The problem is that bash already defines the PPID variable and nntpsend is trying to use it too. To fix this: 1. comment out line PPID = $$ 2. change all occurrences of PPID to say PARENTPID I have also noticed that changing first line of nntpsend from '#!/bin/sh' to "#!/bin/bash" helps in such occasions. Slackware 3.0 seems to have a different incarnation of df than others - so if you want to run innwatch change the following in innwatch.ctl: From: < ## =()<!!! df -i . | awk 'NR == 2 { print $3 }' ! lt ! @<INNWATCH_SPOOLNODES>@ ! throttle ! No space (spool inodes)>()= < !!! df -i . | awk 'NR == 2 { print $3 }' ! lt ! 200 ! throttle ! No space (spool inodes) To: > ## =()<!!! df -i . | awk 'NR == 2 { print $4 }' ! lt ! @<INNWATCH_SPOOLNODES>@ ! throttle ! No space (spool inodes)>()= > !!! df -i . | awk 'NR == 2 { print $4 }' ! lt ! 200 ! throttle ! No space (spool inodes) (from Jim Kerr <jak7@opsirm1.em.cdc.gov>) ------ Linux 2.x complains at compiling: gcc -O -o nnrpd article.o group.o commands.o misc.o newnews.o nnrpd.o post.o loadave.o ../libinn.a loadave.o(.text+0x3b): undefined reference to `nlist' You can either add /usr/lib/libelf.a as missing library to the Makefile or apply the following patch (with some fuzz ... ) from coneill@premier1.premier.net (Clayton O'Neill) : --- /usr/local/news/INN/nnrpd/loadave.c Fri Jan 29 10:51:58 1993 +++ loadave.c Wed Jul 17 15:36:30 1996 @@ -1,8 +1,28 @@ -/* $Revision: 2.34 $ +/* $Revision: 2.34 $ ** */ #include "nnrpd.h" #if NNRP_LOADLIMIT > 0 +#ifdef linux + +/* +** Get the current load average as an integer. +*/ +int +GetLoadAverage() +{ + FILE *ProcLoadAve; + float load; + + if ((ProcLoadAve=fopen("/proc/loadavg", "r"))==NULL) + return -1; + if (fscanf(ProcLoadAve,"%f", &load)!=1) + return -1; + fclose(ProcLoadAve); + return (int)(load+0.5); +} + +#else #include <nlist.h> [...] + #endif /* linux */ #endif /* NNRP_LOADLIMIT > 0 */ ------ In some newer versions of Linux, nnrpd seems to seg fault. Reason is the size of fd_set (1024 bit), but the macros FD_??? operate on 256bit). This happens if HAVE_UNISTD is set to DONT, so set it to DO. If it still happens, then include <sys/time.h> at the top of include/clibrary.h ------ It might be that rc.news never terminates in unoff4 (and probably other versions), when having DOINNWATCH=true. If this happenes then include a '&' in rc.news as shown: : ${DOINNWATCH} && { : ( sleep 60 ; ${INNWATCH} & ) & ^^^
Subject: (2.9) A/UX 3.0 (Macintosh) tips Tip #1: Use the INN malloc. Tip #2: If you are running INN 1.4 on a Mac running A/UX 3.0.1, Every so often, (generally when someone fires up a reader), INN goes berserk. Syslog says: innd: ME cant select Bad file number This message repeats about 20 times per second. It freezes up my computer and I need to reboot. That's a kernel bug. You do have to reboot. If you compiled inn with gcc, don't. My experience was that somehow, if INN was compiled with GCC the kernel bug is triggered, but that doesn't happen with cc.
Subject: (2.10) Alpha OSF tips: To compile INN for the DEC Alpha, follow the instructions in the INN patch archive on A config.data file for OSF1.3a is in: A config.data file for OSF3.0 is in: In rc.news you need to start $INNWATCH using the following: $.
Subject: (2.11) SGI IRIX 5.x tips Some people have reported that IRIX 5.1 isn't very reliable and that it is worth it to run 5.2. 5.3 is even better, but it is still not perfect. (In other words: IT WORKS FINE AS Install.ms DESCRIBES!) Robert Keller <rck@fangio.asd.sgi.com> has some tips for filesystem layout: NOTE: For efs filesystems, you want to be sure that you mount your news spool using the lbsize option (/etc/fstab) set to 4096, eg: /dev/dsk/dks1d5s7 /spool efs rw,raw=/dev/rdsk/dks1d5s7,lbsize=4096 0 0 This tells efs to only preallocate 4K worth of space on the first write of a file to disk. The default of 32K causes a terrible waste of effort for the writing of an average 2K news posting. This also can Innd slow down quite a bit, as the efs is searching for spare 32kB blocks on disk. If you can use xfs then do so. For the new xfs filesystems, you want to be increase the default filesystem block size from 512 bytes to about 2K for maximum performance. I just setup a 8 Gig xfs news spool on a Challenge L using 2K blocks and the performance is absolutely incredible. See also <> for more tips on running INN on SGI platforms. Another note to the 5.3XFS: (From: olson@anchor.engr.sgi.com (Dave Olson)) The ordering/location of files in a directory can change when files are unlinked, with xfs, and some of fastrm's assumptions therefore break. So if you get files which are to be expired with fastrm, but which stay in spool, then try to use normal expire or edit expirerm to remove -s option from RMPROC: old: RMPROC="fastrm -e -s ${SPOOL}" new: RMPROC="fastrm -e -u ${SPOOL}" 6.2XFS has been changed to respect the traditional readdir() behaviour (after rck@fangio.asd.sgi.com (Robert Keller)). Jack Bryans <jbryans@csulb.edu> writes: Both ACT_STYLE and DBZCFLAGS may use MMAP. If you use either, you'll need the following patch: *** include/clibrary.h.orig Thu Mar 18 13:04:07 1993 --- include/clibrary.h Sat Mar 9 14:13:40 1996 *************** *** 103,109 **** --- 103,111 ---- extern POINTER malloc(); extern POINTER realloc(); #if defined(ACT_MMAP) + #ifndef __sgi extern char *mmap(); + #endif /* not sgi */ #endif /* defined(ACT_MMAP) */
Subject: (2.12) Systems where only root can have "cron" jobs. Your cron jobs may not work if you use: su news -c /usr/lib/news/bin/news.daily delayrm expireover Instead, you must put the entire command in quotes. Like this: su news -c "/usr/lib/news/bin/news.daily delayrm expireover" Look for "Pyramid" later in this FAQ for the interesting details.
Subject: (2.13) System V based Unixes (SVR4, Solaris 2.x, SCO ODT 3.0, AIX, A/UX, DELL, ...) NOTE: Solaris 2.x is based on SVR4.0. These tips are useful in a limited way. Read this section for general advice, but follow the "Solaris 2.x" section details. If you are running any non-BSD (i.e. System V based) Unix you MUST have the following option set: ## How should close-on-exec be done? Pick IOCTL or FCNTL. #### =()<CLX_STYLE @<CLX_STYLE>@>()= CLX_STYLE FCNTL This includes SVR4, Solaris 2.x, A/UX and SCO ODT 3.0. (SVR4 means systems based on System V Release 4 from USL. Please check your manual to see if your operating system is based on SVR4.) This CLX_STYLE setting is clearly stated in the Install.ms file and repeated here since so many people post to news.software.nntp after ignoring the warnings. If CLX_STYLE isn't set to FCNTL, you'll get tons of overchan processes hanging around. With SCO ODT 3.0 and MOST systems, innd will link and run if you use IOCTL but eventually will stop answering incoming calls. Don't be fooled. Just because it compiles doesn't mean it's going to work. If you start innd on an AT&T SysV Rel 4.0 machine and get syslog messages like: localhost:15 cant setsockopt(SNDBUF) Protocol error localhost:15 cant setsockopt(RCVBUF) Protocol error then you should FIRST try to change HAVE_UNIX_DOMAIN to "DONT" in config.data. If that doesn't fix the problem, you should add "-USO_SNDBUF" to your DEFS parameter in config.data. Or, you can comment out the "setsockopt()" calls. This is also mentioned in the Install.ms file (which means if you needed to read it here, you weren't paying attention when you read Install.ms) Many SVR4 for i486 binaries (sendmail, mh, vmail, innd, rnews are now on in pub/comp/i486/svr4/*.SVR4.tgz But remember that some of the above need site specific changes, so their usefulness may be limited. If you get syslog messages that say, "ME cant accept RCreader" please refer to Part 3 of this FAQ. DELL ships their Unix with /dev/log chmod'ed to 0644 which means nobody can syslog anything. Pretty stupid, eh? INN uses syslog extensively. If you find that you don't get any syslog messages check to see if you need to: "chmod 0666 /dev/log". Nobody knows why SVR4 boxes often give error messages like, "innd: accept: SIOCGPGRP failed errno 22". There's some sort of obscure bug with the SVR4 accept() call that can lead to these messages, if the executable is linked a certain way. I suspect that the same symbol -- for two totally separate variables or routines -- is defined in two different libraries, so if you link in certain ways you get the "wrong" thing. This error drove me crazy when I first built sendmail V8 on our NCR 3000 box. But I re-linked it a different way and I haven't seen the error since. Good riddance. I suggest you play around with your link libraries and/or order of linkage. kevin@cfc.com (Kevin Darcy) says he never gets these messages since he started using (in config.data): LIBS -lsocket -lnsl -lelf If your SVR4 system still doesn't run correctly, check the Solaris 2.x suggestions.
Subject: (2.14) Solaris 2.x special needs Solaris 2.5: Sun assures that Solaris 2.5 does no longer have the socket bug (see fix #7 below) and Dave Zavatson <dhzavatson@ucdavis.edu> writes that the bug still exists ... So if you see "'resource temp unavailable' errors, you have to apply it. Joe St Sauver <JOE@OREGON.UOREGON.EDU> submitted the following: | Symptom: One of the topologically distant sites notices far lower than normal | article throughput. Further investigation by the remote site (using netstat) | identifies a large number of "completely duplicated packets" originating | with the Solaris feed host. | Resolution: The local Solaris 2.5 host had not applied Sun patches 103169-05 | ("ip driver and ifconfig fixes") and 103447-03 ("tcp patch") as can be | obtained from | (Solaris 2.5.1 users, see 103582-01 and 103630-01). | Without these patches, when working with hosts that are topologically | remote, TCP/IP throughput reportedly can drop to as little as 5% of | what it should be. | For further information, see: <199607140422.VAA04495@yorick.cygnus.com> | quoting a 7 June 1996 article posted to comp.unix.solaris by Cathe A. Ray | (Manager of Internet Engineering for Sun). | Thanks to Howard Goldstein <hgoldste@bbs.mpcs.com> for the detective work in | isolating and resolving this problem! SOLARIS 2.4: Install the Recommended cluster patch from Sun. The Recommended cluster patch is: The README is: Then follow the directions in. The patch needs to be applied BY HAND, it is not in the correct format to work with Larry Wall's patch program. Also, do *not* link with the /usr/ucblib stuff, and HAVE_WAITPID should be set to "DO". On 3/25/95 Sun introduced patch 101945-23 which fixes bug #1178506 titled "INN wounded after upgrade to SunOS 5.4". This fixes the "cant read Resource temporarily unavailable" bug that some have reported. But Even if the Sun Patch mentions "1186224 socket select hangs in NON-BLOCKED mode", this seems not to be totally fixed. Ian Dickinson <idickins@fore.com> doesn't notice it on his lightly loaded server. But on heavily loaded machines, it occurs occasionally (<5 times a day). See below for a patch (Solaris Fix #7 ) It seems that the last version of the kernel patch for Sparc is 19945-36; 191945-29 is known to work. For x86 the latest version is 101946-29, which has problems with Unix domain sockets, so 101946-12 seems to be the last usable one here ... Include /opt/SUNWspro/bin and /usr/bin in your path before /usr/ucb as /usr/ucb/sed does not work well. SOLARIS 2.3: If you install the "Recommended cluster patch" I *think* you will only need to pay attention to Fix #5 listed below. It would be helpful if people sent an update about this. The Recommended cluster patch is: The README is: (note: If you trust other people to compile programs for you [especially ones that run as root] you can get inn1.4sec pre-compiled w/gcc at) INN works with Solaris 2.[0123]. It's not easy, but it will work. The problem is that depending on which Solaris patches you have installed, you have to install various INN patches. There are too many combinations of Sun patches and INN patches to be able to say what is required and what isn't. (See the "SOLARIS 2.3" tip above for one tried and tested configuration). Here is the general guide: Step 1: Use the info for config.data for Solaris 2.x that is included Install.ms. Step 2: As you go, if you get any of the problems listed below, try the fix listed. Eventually you will be up and running with only the fixes you need. If you try to install ALL the fixes at once, things will definitely not work. COMPILER TIPS: Use gcc or /opt/SUNWspro/bin/cc. Do *not* use /usr/ucb/cc. In fact, remove /usr/ucb from your path when you compile. For directory structure - be careful about /var/news, as the news(1) tool also writes in this area an might damage your files. (Need more input on this). The patch program supplied with Solaris 2.5 appears to not understand the "new-style" context diffs which virtually everyone uses these days so you have to fetch the gnu-patch as described in part8 of this FAQ. Also it doesn't know -p0 option ; it wants -p 0 and the file to patch has to be writable. ---------- Solaris Fix #1 Under Solaris 2.[012] (SunOS 5.0, 5.1, 5.2) you must add the following at the beginning of each file using gethostbyname(): #define gethostbyname __switch_gethostbyname Under Solaris 2.3 gethostbyname() might work without changes depending on your configuration. We haven't figured out when they work and when they don't. If you run into problems, try to change "gethostbyname()" to "solaris_gethostbyname()" and then use the gethostbyname() listed in the Solaris Porting FAQ. This isn't a perfect solution, because you now need a different binary for Solaris 2.[012] systems. It also seems to be a good idea to put dns in front of nis in /etc/nsswitch.conf hosts: dns nis files It would be great if someone were to submit a solaris_gethostbyname() function who's binary works under all Solaris revs and gives all the semantics of BSD gethostbyname(). In particular, one that doesn't have the problems discussed in sun bugid #1126573 or #1135988. It would be amazing if this was submitted by one of the many Sun employees that flame the INN FAQ maintainer in comp.sys.sun.admin every time he bitches about how much he hates Solaris 2.x. :-) ---------- Solaris Fix #2 Under all Solaris 2.* versions there is a problem with innwatch.ctl. It expects to use "df -i" to find out how many inodes are free on your disk. /usr/{sbin,5bin,bin}/df doesn't support the "-i" option, it has a "-e" option that outputs the info you want, but in a different format. You should use "/usr/ucb/df -i" instead, since this version of df includes the "-i" option. If you have too much space left on your disks (;-)) you will see the following: Filesystem iused ifree %iused Mounted on /dev/md/dsk/d10 103495213433720 7% /var/spool/news So awk will print 7% as number of free inodes ... Ian Dickinson <idickins@fore.com> wrote a inndf which can be found at the usual place. This inndf compiled with gcc and -DHAVE_STATVFS seems to work though (after Nash E. Foster <nef10958@usln1b.glaxo.com> ). A new version of this is available which works with large filesystems is available from If you have your news spool NFS mounted from another box, which is absolutely not recommended (see #5.15 , ME cant nonblock), then the following might help: rsh other_box /usr/ucb/df -u /var/spool/news /usr/ucb/df is part of the BSD Compatibility stuff. If you loaded Solaris 2.x without that, you can replace innwatch.ctl's disk checks with these lines: ## If load is OK, check space (and inodes) on various filesystems ## =()<!!! /usr/bin/df -k . | awk 'NR == 2 { print $4 }' ! lt ! @<INNWATCH_SPOOLSPACE>@ ! throttle ! No space (spool)>()= !!! /usr/bin/df -k . | awk 'NR == 2 { print $4 }' ! lt ! 8000 ! throttle ! No space (spool) ## =()<!!! /usr/bin/df -k @<_PATH_BATCHDIR>@ | awk 'NR == 2 { print $4 }' ! lt ! @<INNWATCH_BATCHSPACE>@ ! throttle ! No space (newsq)>()= !!! /usr/bin/df -k /news2/spool/out.going | awk 'NR == 2 { print $4 }' ! lt ! 800 ! throttle ! No space (newsq) ## =()<!!! /usr/bin/df -k @<_PATH_NEWSLIB>@ | awk 'NR == 2 { print $4 }' ! lt ! @<INNWATCH_LIBSPACE>@ ! throttle ! No space (newslib)>()= !!! /usr/bin/df -k /news2/privcontrol | awk 'NR == 2 { print $4 }' ! lt ! 40000 ! throttle ! No space (newslib) ## =()<!!! /usr/bin/df -k @<_PATH_OVERVIEWDIR>@ | awk 'NR == 2 { print $4 }' ! lt ! @<INNWATCH_OVERVIEWSPACE>@ ! throttle ! No space (overview)>()= !!! /usr/bin/df -k /news3/overview | awk 'NR == 2 { print $4 }' ! lt ! 6000 ! throttle ! No space (overview) ## =()<!!! /usr/bin/df -e . | awk 'NR == 2 { print $2 }' ! lt ! @<INNWATCH_SPOOLNODES>@ ! throttle ! No space (spool inodes)>()= !!! /usr/bin/df -e . | awk 'NR == 2 { print $2 }' ! lt ! 200 ! throttle ! No space (spool inodes) ---------- Solaris fix #3 Don't run the "lint" step if you use Solaris. In fact, nobody needs to execute this step except Rich, when he's writing new code. If you have a Solaris machine without "lint", just make "lint" a symlink to "/bin/echo". ---------- Solaris fix #4 People running Solaris 2.3 have built INN with HAVE_UNIX_DOMAIN set to TRUE and everything seems to be ok. I guess Sun has fixed enough bugs in 2.3 to make it usable. I recommend the latest "recommended patches" if you run any version of Solaris 2.x. To install all of the "Recommended Patches" in one command, refer to: ---------- Solaris fix #5 If "inews" outputs "Bad Message-ID" when posting Under Solaris 2.x (where x = 0, 1, 2 or 3) you need to change the file "getfqdn.c". Find the lines that read: if (strchr(hp->h_name, '.') == NULL) { /* Try to force DNS lookup if NIS/whatever gets in the way. */ (void)strncpy(temp, buff, sizeof buff); (void)strcat(temp, "."); hp = gethostbyname(temp); } and delete them. ---------- Solaris fix #6 If posting gets you "441 Can't generate Message-ID, Error 0" and you are running with DNS, then the problem is with Solaris 2.3's gethostbyname. dns. If you ask for a host with "hostname." it returns "hostname." instead "hostname.yourdomain.com" as expected by nn. The workaround is to define "domain" in your inn.conf and apply the following patch to getfqdn.c: *** getfqdn.c.~1~ Sun Sep 4 09:02:37 1994 --- getfqdn.c Sun Sep 4 09:53:11 1994 *************** *** 35,45 **** if ((hp = gethostbyname(buff)) == NULL) return NULL; ! if (strchr(hp->h_name, '.') == NULL) { ! /* Try to force DNS lookup if NIS/whatever gets in the way. */ ! (void)strncpy(temp, buff, sizeof buff); ! (void)strcat(temp, "."); ! hp = gethostbyname(temp); ! } ! if (hp != NULL && strchr(hp->h_name, '.') != NULL) { if (strlen(hp->h_name) < sizeof buff - 1) return strcpy(buff, hp->h_name); --- 35,39 ---- if ((hp = gethostbyname(buff)) == NULL) return NULL; ! if (strchr(hp->h_name, '.') != NULL) { if (strlen(hp->h_name) < sizeof buff - 1) return strcpy(buff, hp->h_name); ---------- Solaris fix #7 From Ian Dickinson <ian@fore.com>: Sun appear to reduced the frequency of the problem, but not fixed the bug itself. I still need this under SunOS5.4 101945-29. You should already have -DSUNOS5 in your DEFS setting in config.data anyway. (Note that in 1.5.x this workaround is already in the source. You can enable with with specifying -DPOLL_BUG in the DEFS settings in config.data. Thanks to rhaskins@shiva.com who pointed that out). This should apply - maybe with a bit of fuzz: *** innd/chan.c.ORIG Wed Dec 14 11:03:16 1994 --- innd/chan.c Thu Dec 15 17:00:54 1994 *************** *** 497,502 **** --- 497,508 ---- bp->Left = bp->Size - bp->Used; i = read(cp->fd, &bp->Data[bp->Used], bp->Left - 1); if (i < 0) { + #ifdef SUNOS5 + /* return of -2 indicates EAGAIN, for SUNOS5.4 poll() bug workaround */ + if (errno == EAGAIN) { + return -2; + } + #endif syslog(L_ERROR, "%s cant read %m", p); return -1; } *** innd/nc.c.ORIG Thu Mar 18 21:04:28 1993 --- innd/nc.c Thu Dec 15 17:00:41 1994 *************** *** 783,788 **** --- 783,794 ---- /* Read any data that's there; ignore errors (retry next time it's our * turn) and if we got nothing, then it's EOF so mark it closed. */ if ((i = CHANreadtext(cp)) < 0) { + #ifdef SUNOS5 + /* return of -2 indicates EAGAIN, for SUNOS5.4 poll() bug workaround */ + if (i == -2) { + return; + } + #endif if (cp->BadReads++ >= BAD_IO_COUNT) { if (NCcount > 0) NCcount--; ---------- Solaris fix #8 From: Joe St Sauver <joe@decoy.uoregon.edu> We recently upgraded some machines in our news farm to fast ethernet, and after doing so we noticed poor performance (ping times of 30msec between two machines each connected to dedicated switch ports on the same switch...). Poking around a little, we noticed that under Solaris 2.5, tcp_conn_req_max is set to 32 by default, which is a little low if you are working with a fair number of peers or have a lot of readers. We bumped that value to 1000 or so (1024 max under Solaris 2.5), using: # ndd -set /dev/tcp tcp_conn_req_max 1000 and now ping times are back into the 0 or 1 msec reported range you'd hope to see from that sort of topology. :-)
Subject: (2.15) Slackware Tips Slackware comes with The Reference Implementation of NNTP as well as INN. However, if you select "INN" it doesn't remove the nntp entry in your /etc/inetd.conf. If the Slackware people aren't sure why INN requires you to remove that line from /etc/inetd.conf, they should get out of the business. (oh, they can complain to tal@plts.org... he wrote this paragraph).
Subject: (2.16) BSDi 2.0 / FreeBSD / NetBSD Paul Vixie <paul@vix.com> wrote that for BSDi 2.0the use of mmap for use with the history file is ok (add -DMMAP to DBZCFLAGS in config.data), but not for active, so set ACT_STYLE to READ. Others write that it is not. Your mileage may vary and depend on how heavily-used your machine is. For NetBSD1.0 and 1.1 one shouldn't use mmap() unless you add the following: *** icd.c.orig Wed Jun 7 15:04:05 1995 --- icd.c Sat Dec 30 16:22:50 1995 *************** *** 369,375 **** ICDwriteactive() { #if defined(ACT_MMAP) ! /* No-op. */ #else --- 369,375 ---- ICDwriteactive() { #if defined(ACT_MMAP) ! msync(ICDactpointer, 0); #else In NetBSD 1.1 the use of -DMMAP is also ok.(after Curt Sampson <curt@portal.ca>) FreeBSD users should use mmap() with caution. There are serious problems with some realeases of the FreeBSD operating system concerning mmap() and the performance without is quite good. With current releases, namely 2.2.1, this seems fixed. Users of 4.4 BSD derived systems should set LSEEKVAL in config.data to ``off_t'' in order to reflect the 64bit long off_t's in those systems. If you have problems with makehistory on BSDi then replace the BSDi sort command with an other one e.g. from the gnu textutils package. It seems that the BSDi one has some problems with 64kB boundaries. BSDi has a default some datasize limits which will let some operations fail. Add the following at the beginning of rc.news (and also of news.daily): limit datasize unlimited limit openfiles 256 limit memoryuse unlimited limit maxproc unlimited If this still fails look at #5.24 (the same applies to FreeBSD). In 2.1, BSDi introduced a bug with wrong spelling of ``february'' somewhere which lets inn fail somehow .. But they also have a patch: Here's the Summary from the fix: This patch fixes a bug in the BSD/OS 2.1 release of the inn programs. A fix that we made between the 2.0 and 2.1 releases introduced a bug that caused innd to incorrectly parse dates. The symptom is that inn programs fail with "437 Bad "Date" header" in the /var/log/news/news file, or that Pnews will fail with "441 Can't parse "Date" header" messages. For FreeBSD 2.1.6 and INN1.5 Vincent Archer <archer@frmug.org> has written a autoconf package, that you can get from <> James will try to incorporate this into the main INN tree. To get it to work: Go to your inn 1.5 source directory, untar, you'll get configure and config/config.data.in. Type ./configure; make; make install :) (well, you might want to check the pathnames and parameters first, or type ./configure --help)
Subject: (2.17) 3Com Router users If you observe strange behavior, like nnrpd locking and not sending some articles to the clients, and if you find no clues about other potential problems, then check your IP layer: some users have observed bugs in the IP implementation of 3Com routers caused TCP sessions lock outs. You have very probably also NFS problems then. Upgrading to the latest PROMs fixes this totally bizarre problem.
Subject: (2.18) NOV problems on a Pyramid This applies only to Pyramid systems that run OSx. Newer systems run DC/OSx and/or Sinix 5.43 which are "normal" SysV that have normal cronjobs. Q: I just turned on the overview stuff and I don't think news.daily is properly expiring the .overview files. I'm using a Pyramid. A: Do you need quotes in your crontab entry? Look at your news.daily report -- expire using "expireover delayrm" should take a few minutes. If it takes longer than, say, 10-20 minutes, then the keywords aren't being seen by news.daily so perhaps the commandline quoting is wrong. i.e. you had: su news -c /usr/lib/news/bin/news.daily delayrm expireover You should have: su news -c "/usr/lib/news/bin/news.daily delayrm expireover" Without quoting, the options are thrown away and only the "news.daily" is executed.
Subject: (2.19) Warnings to people that must set HAVE_UNIX_DOMAIN to DONT Disclaimer: First of all, if you have to set HAVE_UNIX_DOMAIN to DONT, YOU HAVE TO SET IT to DONT. It's not a choice you can make, it's a description of the operating system that you've purchased. If you've wrongly set this variable to DO your system isn't going to work *at* *all*. When you use POST (the NNTP command), you are talking to nnrpd. nnrpd cleans up your headers, adds the missing headers that it is allowed to add, checks whatever it checks, and then submits the finalized version to innd. How does it talk to innd? If you have HAVE_UNIX_DOMAIN set to DO, nnrpd opens a Unix domain socket and sends the text. At this point it is talking to innd somewhat like ctlinnd does. innd can trust that the post isn't forged since it is coming from a program trustworthy enough to get to the socket (which isn't much). If you have HAVE_UNIX_DOMAIN set to DONT, it has no choice but to open a socket to port 119, issue the "IHAVE" command, and send the text that way (just like a remote newsreader). This means that innd (not another nnrpd) has to be at the other end of the pipe. If it opens the connection and sees a "nnrpd" you're hosed and you get "441 480 Transfer permission denied". (Better the "441 480" message than an infinite loop of nnrpd's connecting to nnrpd's!) To get innd to not hand off the connection to a nnrpd process, you must have the host's name in the hosts.nntp file. (don't forget to do "ctlinnd reload hosts.nntp") If you have your host's name in the hosts.nntp file, then any newsreader running on your nntphost must be "INN-aware" (i.e. that they issue the "mode reader" command) or they must read news via the file system instead of NNTP. If you have NNTP-based newsreaders that can't send the "mode reader" command, you can try including "server: localhost" in your inn.conf file, but then you must have a different inn.conf file for the other machines. If you can't do that, you have no other options but to recompile your newsreaders. Remember, if you change your inn.conf file, you must shutdown and restart innd. There is no "ctlinnd reload inn.conf" command. There is a patch which is listed in the unoff3/UNOFF-NOTES that seems to work at least for linux but should theoretically work for other os that have to set HAVE_UNIX_DOMAIN to DONT which resolves the problem that multiple invocations of ctlinnd break.
Subject: (2.20) INN for SNI RM400 There seems to be no working config.data available for that hardware, but you can get a ported version of INN from SNI in the ``NetServe'' package. If anyone has a working configuration and tips how to get there, then mail the FAQ maintainer for inclusion in part9 ..
Subject: (2.21) INN on NeXT-/OpenStep Scott Anguish <sanguish@digifix.com> has made his tips of configuring INN on NeXT-/OpenStep available on -- See <a href="">NetBSD</a> for a multiplatform OS What would you call a BBS run by a mom? A "mother board". | http://www.faqs.org/faqs/usenet/software/inn-faq/part2/ | crawl-002 | en | refinedweb |
First time here? Check out the FAQ!
Steps to be followed to make it run:
1.Install open stack RDO with neutron enabled.
2.Create ifcfg-br-ex interface on compute node/neutron node:
3.If your open stack server (compute node/neutron node) is VM in ESXi set the promiscuous mode on ESX vSwitch.
problem solved..!
As compute node itself is a virtual machine residing on ESX server need to enable promiscuous mode on Esxi virtual switch.
Thanks
following are the links i am following to add network namespace and dkms.Then will lead to devstack installation.
pl let me knw if any other precise way to do this.
Is it because network namespace issue..? As i am using centos 6.4 + devstack setup.And found that it does not support network namespace required for smooth working of open stack neutron service.
@dheeru,
thanks for the replay, by checking above link I see it will be very useful documentation for all who are new to open stack and wants to make their things run.Waiting for it completion.
In mean time plz let me know if any other links can be followed to make this things run.
Thanks
Hi All,
I am using centos 6.4 (virtual machine installed on ESX server) to install open stack (devstack),with single node setup(All in one).
Centos on which open stack is installed have only one NIC card. (eht0).
1. I install neutron service to this setup,
2.created public and private network , and routed them together following instruction given in user guide.
3.added security rules to pas tcp/icmp traffic to the instance, and added keypair to ssh it from outer world.
4. assigned private and public (floating ip) to the open stack instance (cirros).
what i found is, i am unable to ping the instance (with both private and floating ip) from centos on which open stack is installed, neither i can ssh to it as it ends with error message 'no route to the host'
My question is i sit because of single nic card (eth0) present on my centos causing me this issue..?
Is there any workaround for this problem..?
Any feedback will be very helpful.
Thanks..
even i am facing the same issue.
any feedback will be very helpful.
While launching instance on devstack hawana (single node setup) using fedora qcow2 image,it fails to start open ssh server daemon and LSB bring up/down networking.
Allowed tcp/icmp rules in security group still not able to ping private or floating ip.
Any feedback will be helpful.
OpenStack is a trademark of OpenStack Foundation. This site is powered by Askbot. (GPLv3 or later; source). Content on this site is licensed under a CC-BY 3.0 license. | https://ask.openstack.org/en/users/2482/mithun/?sort=recent | CC-MAIN-2019-43 | en | refinedweb |
Do you (re)arrange your class methods? Why?
Ope Adeyomoye
・1 min read
Lots of (open-source) libraries I go through seem to (re)arrange the methods of member classes in a specific manner e.g. such that any class methods that are to be used by other methods within that class are defined first.
For example:
<?php class Dispatcher { public function isDispatchable() { $number = rand(0, 200); return ($number < 100) ? true : false; } public function dispatch() { if ($this->isDispatchable()) { // implementation goes here... } } }
Here,
isDispatchable() is defined before the method that uses it.
I generally just append new methods to the end of the class 🤷ðŸ½, whether or not the guys that use it are above or below. Should I be sent north of the wall? 🙂
What do you do?
dev.to is where senior developers are built. Sign Up Now
(open source and free forever ❤️)
(open source and free forever ❤️)
Classic DEV Post from Jun 27
Jack Of All Trades or Master of One?
That age old question: how many pies should one have a thumb in?
Here's what I do, but not incredibly consistently:
That's for languages like Java that really don't care what order the methods are defined. In languages like C or Javascript that have to have definitions or have hoisting, it's a little more complicated so I usually just define all helper functions first like you have in your example.
There is no absolute truth, except when is forced by the compiler/interpreter.
Make your own rules, based on your project & coworkers. This kind of decisions are planned or made ad-hoc while the project grows.
Usually I saw new code goes to the end of the file. Pro: good for the old devs that know the project, bad: makes no sense for a newcomer.
Based on my preference you should be sent to the North, but so as we :)
I try to put methods working on the same subject near to each other. But I tend to put constructors/init methods at the begining of the file, just after fields and constant.
Sometime I like to have the public interface method first and their implementation later on.
But to me, if a class/file has more than a few methods with some significant code, it start to lack readability and I prefer to refactor.
200-300 lines for a file is great. 500 lines is acceptable. 1000+ is usually bad. That counting imports, comments and all.
There obviously exception. Facades for example shall have all the relevant method to help discoverability and keep things clean. But that doesn't mean the implementation has to do more than just redirecting to other classes that do the actual job.
In some older languages (e.g. C), functions are (or at least were) required to be declared before they can be used, so that became an unofficial (compiler-enforced) convention for a lot of people.
It can also be useful because if you see a function call inside of another function, you know that the definition has to be above the function you saw it in (if there's no other discernable order to function declaration, e.g. alphabetization). | https://dev.to/ope/do-you-rearrange-your-class-methods-why | CC-MAIN-2019-43 | en | refinedweb |
I added the rx-main package to the a WPF Workbook but when I type 'using System.Reactive;', both the autocompletion and the compiler fail to find the namespace. I tried my own NuGet package and worked fine.
What am I missing here? What version of .NET is the WPF Workbook using?
Any idea why it doesn't work?
Submitted issue to
?
Thank you for filing a bug so that this is on our radar.
There are a lot of NuGet packages that don't work correctly yet. See.
Thanks for the reply. Just wanted to contribute with a test scenario and to be sure RxNET was under your radar.
Still doesn't work in 0.9.0
Today a new version of Rx.NET was released supporting .NET Core 1.0. It requires NuGet 2.12 while version supported by 0.9.0 is 2.10.766.
The package is now called System.Reactive 3.0.0.
I tested version 1.0.0.0 released today and I'm happy to say that it can finally add System.Reactive.* packages.
Unfortunately when I run:
I get the following message and nothing happens.
"warning CS4014: Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the 'await' operator to the result of the call."
If I await the observable, I get only the last value. This is the expected behavior for an awaited observable but it would be interesting to not have to await and see all the returned values, just like LinqPad has been doing for a long time.
I got some help from Paul Betts that gave me this solution:
It shows all the values once the observable completes. It works but it would be much nicer if you supported output from IObservable and show the values as they become available.
Glad it works! IObservable support would certainly be a useful feature.
Any news on this? Has IObservable() support been added to any of the recent versions ? | https://forums.xamarin.com/discussion/comment/319174 | CC-MAIN-2019-43 | en | refinedweb |
$ 44.99 US
28.99 UK
"Community
Experience
Distilled"
C o m m u n i t y
Adam Boduch
Flux Architecture
Flux Architecture
D i s t i l l e d
Flux Architecture
Learn to build powerful and scalable applications with Flux,
the architecture that serves billions of Facebook users every day
E x p e r i e n c e
Adam Boduch
nearly 10 years. Before moving to the front end, he worked on several large-scale
cloud computing products using Python and Linux. No stranger to complexity,
Adam has practical experience with real-world software systems and the scaling
challenges they pose.
Preface
I love Backbone.js. It's an amazing little library that does so much with so little.
It's also unopinionatedthere are endless ways to do the same thing. This last
point gives many Backbone.js programmers a headache. The freedom to implement
things the way we see fit is great, until we start making those unavoidable
consistency errors.
When I first started with Flux, I couldn't really see how such an architecture could
help out a mere Backbone.js programmer. Eventually, I figured out two things.
First, Flux is unopinionated where it mattersthe implementation specifics.
Two, Flux is very much like Backbone in the spirit of minimal moving parts that
do one thing well.
As I started experimenting with Flux, I realized that Flux provides the missing
architectural perspective that enables scalability. Where Backbone.js and other
related technologies fall apart is when something goes wrong. In fact, these bugs
can be so difficult that they're never actually fixedthe whole system is scarred
with workarounds.
I decided to write this book in the hope that other programmers, from all walks of
JavaScript, can experience the same level of enlightenment as I have working with
this wonderful technology from Facebook.
Preface
Chapter 4, Creating Actions, shows how action creator functions are used to feed new
data into the system while describing something that just happened.
Chapter 5, Asynchronous Actions, goes through examples of asynchronous action creator
functions and how they fit within a Flux architecture.
Chapter 6, Changing Flux Store State, gives many detailed explanations and examples
that illustrate how Flux stores work.
Chapter 7, Viewing Information, gives many detailed explanations and examples that
illustrate how Flux views work.
Chapter 8, Information Lifecycle, talks about how information in a Flux architecture
enters the system and how it ultimately exits the system.
Chapter 9, Immutable Stores, shows how immutability is a key architectural property
of software architectures, such as Flux, where data flows in one direction.
Chapter 10, Implementing a Dispatcher, walks through the implementation of a
dispatcher component, instead of using the Facebook reference implementation.
Chapter 11, Alternative View Components, shows how view technologies other than
React can be used within a Flux architecture.
Chapter 12, Leveraging Flux Libraries, gives an overview of two popular Flux
librariesAlt.js and Redux.
Chapter 13, Testing and Performance, talks about testing components from within the
context of a Flux architecture and discusses performance testing your architecture.
Chapter 14, Flux and the Software Development Life Cycle, discusses the impact Flux has
on the rest of the software stack and how to package Flux features.
Chapter 1
What chapter chapter by walking through the core components found in
any Flux architecture, and we'll install the Flux npm package and write a hello world
Flux application right away. Let's get started.
[1]
What is Flux?
Controller
Controller
At first glance, there's nothing wrong with this picture. The data-flow, represented
by the arrows, is easy to follow. But where does the data originate? For example, the
view can create new data and pass it to the controller, in response to a user event.
A controller can create new data and pass it to another controller, depending on the
composition of our controller hierarchy. What about the controller in questioncan
it create data itself and then use it?
In a diagram such as this one, these questions don't have much virtue. But, if we're
trying to scale an architecture to have hundreds of these components, the points
at which data enters the system become very important. Since Flux is used to
build architectures that scale, it considers data entry points an important
architectural pattern.
[2]
Chapter 1
Managing state
State is one of those realities we need to cope with in frontend development.
Unfortunately, we can't compose our entire application of pure functions with no
side-effects for two reasons. First, our code needs to interact with the DOM interface,
in one way or another. This is how the user sees changes in the UI. Second, we don't
store all our application data in the DOM (at least we shouldn't do this). As time
passes and the user interacts with the application, this data will change.
There's no cut-and-dry approach to managing state in a web application, but there
are several ways to limit the amount of state changes that can happen, and enforce
how they happen. For example, pure functions don't change the state of anything,
they can only create new data. Here's an example of what this looks like:
Input
Pure Function
Output
As you can see, there's no side-effects with pure functions because no data changes
state as a result of calling them. So why is this a desirable trait, if state changes are
inevitable? The idea is to enforce where state changes happen. For example, perhaps
we only allow certain types of components to change the state of our application
data. This way, we can rule out several sources as the cause of a state change.
Flux is big on controlling where state changes happen. Later on in the chapter,
we'll see how Flux stores manage state changes. What's important about how Flux
manages state is that it's handled at an architectural layer. Contrast this with an
approach that lays out a set of rules that say which component types are allowed
to mutate application datathings get confusing. With Flux, there's less room for
guessing where state changes take place.
[3]
What is Flux?
Let's think about why this matters for a moment. In a system where data is updated
asynchronously, we have to account for race conditions. Race conditions can be
problematic because one piece of data can depend on another, and if they're updated
in the wrong order, we see cascading problems, from one component to another.
Take a look at this diagram, which illustrates this problem:
System
Async Source
Generic Data
Dependent Data
Async Source
Generic Data
Synchronizer
Dependent Data
Async Source
[4]
Chapter 1
Information architecture
It's easy to forget that we work in information technology and that we should be
building technology around information. In recent times, however, we seem to have
moved in the other direction, where we're forced to think about implementation
before we think about information. More often than not, the data exposed by the
sources used by our application doesn't have what the user needs. It's up to our
JavaScript to turn this raw data into something consumable by the user. This is our
information architecture.
Does this mean that Flux is used to design information architectures as opposed to a
software architecture? This isn't the case at all. In fact, Flux components are realized
as true software components that perform actual computations. The trick is that
Flux patterns enable us to think about information architecture as a first-class design
consideration. Rather than having to sift through all sorts of components and their
implementation concerns, we can make sure that we're getting the right information
to the user.
Once our information architecture takes shape, the larger architecture of our
application follows, as a natural extension to the information we're trying to
communicate to our users. Producing information from data is the difficult part.
We have to distill many sources of data into not only information, but information
that's also of value to the user. Getting this wrong is a huge risk for any project.
When we get it right, we can then move on to the specific application components,
like the state of a button widget, and so on.
Flux architectures keep data transformations confined to their stores. A store is
an information factoryraw data goes in and new information comes out. Stores
control how data enters the system, the synchronicity of state changes, and they
define how the state changes. When we go into more depth on stores as we progress
through the book, we'll see how they're the pillars of our information architecture.
What is Flux?
The main problem is that Flux operates at an architectural level. It's used to address
information problems that prevent a given application from scaling to meet user
demand. If Facebook decided to release Flux as yet another JavaScript framework,
it would likely have the same types of implementation issues that plague other
frameworks out there. For example, if some component in a framework isn't
implemented in a way that best suits the project we're working on, then it's not
so easy to implement a better alternative, without hacking the framework to bits.
What's nice about Flux is that Facebook decided to leave the implementation options
on the table. They do provide a few Flux component implementations, but these are
reference implementations. They're functional, but the idea is that they're a starting
point for us to understand the mechanics of how things such as dispatchers are
expected to work. We're free to implement the same Flux architectural pattern as
we see it.
Flux isn't a framework. Does this mean we have to implement everything ourselves?
No, we do not. In fact, developers are implementing Flux libraries and releasing
them as open source projects. Some Flux libraries stick more closely to the Flux
patterns than others. These implementations are opinionated, and there's nothing
wrong with using them if they're a good fit for what we're building. The Flux
patterns aim to solve generic conceptual problems with JavaScript development,
so you'll learn what they are before diving into Flux implementation discussions.
[6]
Chapter 1
Flow?
Data flow is a useful abstraction, because it's easy to visualize data as it enters
the system and moves from one point to another. Eventually, the flow stops.
But before it does, several side-effects happen along the way. It's that middle
block in the preceding diagram that's concerning, because we don't know exactly
how the data-flow reached the end.
Let's say that our architecture doesn't pose any restrictions on data flow.
Any component is allowed to pass data to any other component, regardless
of where that component lives. Let's try to visualize this setup:
Data Flow Start
Component
Component
Component
Component
As you can see, our system has clearly defined entry and exit points for our data.
This is good because it means that we can confidently say that the data-flows through
our system. The problem with this picture is with how the data-flows between the
components of the system. There's no direction, or rather, it's multidirectional. This isn't
a good thing.
[7]
What is Flux?
Flux is a unidirectional data flow architecture. This means that the preceding
component layout isn't possible. The question iswhy does this matter? At times, it
might seem convenient to be able to pass data around in any direction, that is, from
any component to any other component. This in and of itself isn't the issuepassing
data alone doesn't break our architecture. However, when data moves around our
system in more than one direction, there's more opportunity for components to fall
out of sync with one another. This simply means that if data doesn't always move in
the same direction, there's always the possibility of ordering bugs.
Flux enforces the direction of data-flows, and thus eliminates the possibility of
components updating themselves in an order that breaks the system. No matter
what data has just entered the system, it'll always flow through the system in the
same order as any other data, as illustrated here:
Data Flow Start
Component
Component
Component
Component
[8]
Chapter 1
Consistent notifications
The direction in which we pass data from component to component in Flux
architectures should be consistent. In terms of consistency, we also need to think
about the mechanism used to move data around our system.
For example, publish/subscribe (pub/sub) is a popular mechanism used for intercomponent communication. What's neat about this approach is that our components
can communicate with one another, and yet we're able to maintain a level of
decoupling. In fact, this is fairly common in frontend development because component
communication is largely driven by user events. These events can be thought of as
fire-and-forget. Any other components that want to respond to these events in some
way, need to take it upon themselves to subscribe to the particular event.
While pub/sub does have some nice properties, it also poses architectural
challenges, in particular scaling complexities. For example, let's say that we've
just added several new components for a new feature. Well, in which order do
these components receive update messages relative to pre-existing components?
Do they get notified after all the pre-existing components? Should they come first?
This presents a data dependency scaling issue.
The other challenge with pub-sub is that the events that get published are often
fine-grained to the point where we'll want to subscribe and later unsubscribe from
the notifications. This leads to consistency challenges because trying to code lifecycle
changes when there's a large number of components in the system is difficult and
presents opportunities for missed events.
[9]
What is Flux?
The idea with Flux is to sidestep the issue by maintaining a static inter-component
messaging infrastructure that issues notifications to every component. In other
words, programmers don't get to pick and choose the events their components will
subscribe to. Instead, they have to figure out which of the events that are dispatched
to them are relevant, ignoring the rest. Here's a visualization of how Flux dispatches
events to components:
Event
Dispatcher
Event
Component
Event
Component
Event
Component
The Flux dispatcher sends the event to every component; there's no getting around
this. Instead of trying to fiddle with the messaging infrastructure, which is difficult
to scale, we implement logic within the component to determine whether or not the
message is of interest. It's also within the component that we can declare dependencies
on other components, which helps influence the ordering of messages. We'll cover this
in much more detail in later chapters.
Actions
Stores
Views
[ 10 ]
Chapter 1
This diagram isn't intended to capture the entire data flow of a Flux
architecture, just how data-flows between the main three layers. It
also doesn't give any detail about what's in the layers. Don't worry,
the next section gives introductory explanations of the types of Flux
components, and the communication that happens between the
layers is the focus of this book.
As you can see, the data-flows from one layer to the next, in one direction. Flux
only has a few layers, and as our applications scale in terms of component counts,
the layer counts remains fixed. This puts a cap on the complexity involved with
adding new features to an already large application. In addition to constraining
the layer count and the data-flow direction, Flux architectures are strict about
which layers are actually allowed to communicate with one another.
For example, the action layer could communicate with the view layer, and we
would still be moving in one direction. We would still have the layers that Flux
expects. However, skipping a layer like this is prohibited. By ensuring that layers
only communicate with the layer directly beneath it, we can rule out bugs introduced
by doing something out-of-order.
[ 11 ]
What is Flux?
Flux components
In this section, we'll begin our journey into the concepts of Flux. These concepts
are the essential ingredients used in formulating a Flux architecture. While there's
no detailed specifications for how these components should be implemented,
they nevertheless lay the foundation of our implementation. This is a high-level
introduction to the components we'll be implementing throughout this book.
Action
Actions are the verbs of the system. In fact, it's helpful if we derive the name of
an action directly from a sentence. These sentences are typically statements of
functionality something we want the application to do. Here are some examples:
These are simple capabilities of the application, and when we implement them as
part of a Flux architecture, actions are the starting point. These human-readable
action statements often require other new components elsewhere in the system,
but the first step is always an action.
So, what exactly is a Flux action? At it's simplest, an action is nothing more than a
stringa name that helps identify the purpose of the action. More typically, actions
consist of a name and a payload. Don't worry about the payload specifics just yetas
far as actions are concerned, they're just opaque pieces of data being delivered into
the system. Put differently, actions are like mail parcels. The entry point into our
Flux system doesn't care about the internals of the parcel, only that they get to
where they need to go. Here's an illustration of actions entering a Flux system:
Action
Action
Action
Payload
Payload
Payload
Flux
[ 12 ]
Chapter 1
This diagram might give the impression that actions are external to Flux, when in
fact they're an integral part of the system. The reason this perspective is valuable is
because it forces us to think about actions as the only means to deliver new data into
the system.
Golden Flux Rule: If it's not an action, it can't happen.
Dispatcher
The dispatcher in a Flux architecture is responsible for distributing actions to the
store components (we'll talk about stores next). A dispatcher is actually kind of like a
brokerif actions want to deliver new data to a store, they have to talk to the broker,
so it can figure out the best way to deliver them. Think about a message broker in
a system like RabbitMQ. It's the central hub where everything is sent before it's
actually delivered. Here is a diagram depicting a Flux dispatcher receiving actions
and dispatching them to stores:
Dispatcher
Action
Action
Store
Action
Action
Store
[ 13 ]
What is Flux?
Store
Stores are where state is kept in a Flux application. Typically, this means the
application data that's sent to the frontend from the API. However, Flux stores take
this a step further and explicitly model the state of the entire application. If this
sounds confusing or like a generally bad idea, don't worrywe'll clear this up as
we make our way through subsequent chapters. For now, just know that stores are
where state that matters can be found. Other Flux components don't have state
they have implicit state at the code level, but we're not interested in this, from an
architectural point of view.
Actions are the delivery mechanism for new data entering the system. The term
new data doesn't imply that we're simply appending it to some collection in a store.
All data entering the system is new in the sense that it hasn't been dispatched
as an action yetit could in fact result in a store changing state. Let's look at a
visualization of an action that results in a store changing state:
Action
Store
Store
Payload
Current State
New State
The key aspect of how stores change state is that there's no external logic that
determines a state change should happen. It's the store, and only the store, that
makes this decision and then carries out the state transformation. This is all tightly
encapsulated within the store. This means that when we need to reason about
particular information, we need not look any further than the stores. They're their
own bossthey're self-employed.
Golden Flux Rule: Stores are where state lives, and only stores
themselves can change this state.
[ 14 ]
Chapter 1
View
The last Flux component we're going to look at in this section is the view, and it
technically isn't even a part of Flux. At the same time, views are obviously a critical
part of our application. Views are almost universally understood as the part of our
architecture that's responsible for displaying data to the userit's the last stop as
data-flows through our information architecture. For example, in MVC architectures,
views take model data and display it. In this sense, views in a Flux-based application
aren't all that different from MVC views. Where they differ markedly is with regard
to handling events. Let's take a look at the following diagram:
Data
Data
Event
Typical View
Controller
Model
Event
Flux View
View
Action
Here we can see the contrasting responsibilities of a Flux view, compared with
a view component found in your typical MVC architecture. The two view types
have similar types of data flowing into themapplication data used to render the
component and events (often user input). What's different between the two types
of view is what flows out of them.
The typical view doesn't really have any constraints in how its event handler
functions communicate with other components. For example, in response to a user
clicking a button, the view could directly invoke behavior on a controller, change the
state of a model, or it might query the state of another view. On the other hand, the
Flux view can only dispatch new actions. This keeps our single entry point into the
system intact and consistent with other mechanisms that want to change the state of
our store data. In other words, an API response updates state in the exact same way
as a user clicking a button does.
Given that views should be restricted in terms of how data-flows out of them
(besides DOM updates) in a Flux architecture, you would think that views should be
an actual Flux component. This would make sense insofar as making actions the only
possible option for views. However, there's also no reason we can't enforce this now,
with the benefit being that Flux remains entirely focused on creating information
architectures.
[ 15 ]
What is Flux?
Keep in mind, however, that Flux is still in it's infancy. There's no doubt going to
be external influences as more people start adopting Flux. Maybe Flux will have
something to say about views in the future. Until then, views exist outside of Flux
but are constrained by the unidirectional nature of Flux.
Golden Flux Rule: The only way data-flows out of a view is
by dispatching an action.
The first NPM package we'll need installed is Webpack. This is an advanced module
bundler that's well suited for modern JavaScript applications, including Flux-based
applications. We'll want to install this package globally so that the webpack command
gets installed on our system:
npm install webpack -g
With Webpack in place, we can build each of the code examples that ship with this
book. However, our project does require a couple of local NPM packages, and these
can be installed as follows:
npm install flux babel-core babel-loader babel-preset-es2015 --save-dev
The --save-dev option adds these development dependencies to our file, if one
exists. This is just to get startedit isn't necessary to manually install these packages
to run the code examples in this book. The examples you've downloaded already
come with a package.json, so to install the local dependencies, simply run the
following from within the same directory as the package.json file:
npm install
[ 16 ]
Chapter 1
Now the webpack command can be used to build the example. This is the only
example in the first chapter, so it's easy to navigate to within a terminal window and
run the webpack command, which builds the main-bundle.js file. Alternatively,
if you plan on playing with the code, which is obviously encouraged, try running
webpack --watch. This latter form of the command will monitor for file changes to
the files used in the build, and run the build whenever they change.
This is indeed a simple hello world to get us off to a running start, in preparation
for the remainder of the book. We've taken care of all the boilerplate setup tasks by
installing Webpack and its supporting modules. Let's take a look at the code now.
We'll start by looking at the markup that's used.
<!doctype html>
<html>
<head>
<title>Hello Flux</title>
<script src="main-bundle.js" defer></script>
</head>
<body></body>
</html>
Not a lot to it is there? There isn't even content within the body tag. The important
part is the main-bundle.js scriptthis is the code that's built for us by Webpack.
Let's take a look at this code now:
// Imports the "flux" module.
import * as flux from 'flux';
// Creates a new dispatcher instance. "Dispatcher" is
// the only useful construct found in the "flux" module.
const dispatcher = new flux.Dispatcher();
// Registers a callback function, invoked every time
// an action is dispatched.
dispatcher.register((e) => {
var p;
// Determines how to respond to the action. In this case,
// we're simply creating new content using the "payload"
// property. The "type" property determines how we create
// the content.
switch (e.type) {
case 'hello':
p = document.createElement('p');
[ 17 ]
What is Flux?
p.textContent = e.payload;
document.body.appendChild(p);
break;
case 'world':
p = document.createElement('p');
p.textContent = `${e.payload}!`;
p.style.fontWeight = 'bold';
document.body.appendChild(p);
break;
default:
break;
}
});
// Dispatches a "hello" action.
dispatcher.dispatch({
type: 'hello',
payload: 'Hello'
});
// Dispatches a "world" action.
dispatcher.dispatch({
type: 'world',
payload: 'World'
});
As you can see, there's not much to this hello world Flux application. In fact, the only
Flux-specific component this code creates is a dispatcher. It then dispatches
a couple of actions and the handler function that's registered to the store processes
the actions.
Don't worry that there's no stores or views in this example. The idea is that we've
got the basic Flux NPM package installed and ready to go.
[ 18 ]
Chapter 1
Summary
This chapter introduced you to Flux. Specifically, we looked at both what Flux is
and what it isn't. Flux is a set of architectural patterns that, when applied to our
JavaScript application, help with getting the data-flow aspect of our architecture
right. Flux isn't yet another framework used for solving specific implementation
challenges, be it browser quirks or performance gainsthere's a multitude of tools
already available for these purposes. Perhaps the most important defining aspect
of Flux are the conceptual problems it solvesthings like unidirectional data flow.
This is a major reason that there's no de facto Flux implementation.
We wrapped the chapter up by walking through the setup of our build components
used throughout the book. To test that the packages are all in place, we created a
very basic hello world Flux application.
Now that we have a handle on what Flux is, it's time for us to look at why Flux is the
way it is. In the following chapter, we'll take a more detailed look at the principles
that drive the design of Flux applications.
[ 19 ]
Stay Connected: | https://id.scribd.com/document/313120771/Flux-Architecture-Sample-Chapter | CC-MAIN-2019-43 | en | refinedweb |
From: Ed Brey (brey_at_[hidden])
Date: 2000-08-30 14:20:43
"any" looks like a very useful class. Some comments:
Design:
Why is there not a non-const version of to_ptr? I.e.
template<ValueType>
ValueType* to_ptr();
How about a cast member function to complement the any_cast global:
class any { ...
template<ValueType>
ValueType cast() const;
};
This would allow a convenience such as:
pass_me_int(a.cast<int>());
without declaring the variable that copy_to would require. Using any_cast
in this sense would be redendant, since the fact that the source is of type
any shows up in both the variable name "a" and the name "any_cast".
You almost could get rid of any_cast altogether by doing
int i = any("3").cast<int>();
instead of
int i = any_cast<int>("3");
except I do like how the bottom one looks. I'm not entirely sure where
any_cast would be used versus other casts. I assume it comes into play once
people start defining any-compatible conversions on their classes.
Implementation:
I did some hacking to get any to work on VC6SP4. Here's what I did:
1. Add a dummy parameter on to_ptr, so it looks like:
template<typename ValueType>
const ValueType *to_ptr(const ValueType&) const
Likewise change calls to to_ptr<ValueType>() to to_ptr(ValueType()).
2. Replaced the copy constructor with a specialization, to avoid the bogus
ambiguity. Reordered constructors accordingly.
template<>
any(const any &other)
: content(other.content ? other.content->clone() : 0)
{
}
3. Replaced operator=(const any&) with a specialized version, which
reimplements the constructor functionality instead of calling swap.
template<>
any &operator=(const any &rhs)
{
delete content;
content = rhs.content ? rhs.content->clone() : 0;
return *this;
}
4. Pulled type_info into std, i.e.
namespace std {using ::type_info;}
I don't know why I had to do that since the STLport code looks like it is
doing it for me. I haven't tried looking at the preprocessed output, which
would probably tell the story.
Given these changes, and changes to the use of to_ptr in your test harness,
you test harness compiles, runs, and reports success. Of course all these
hacks would only be for the case of VC. The only exception is that if we
can't find a better to_ptr workaround, we'd want to provide a compatibility
overload of to_ptr, a la:
#ifndef PLEASE_FIX_YOUR_COMPILER_BILL
template<typename ValueType>
const ValueType *to_ptr() const
{
return type_info() == typeid(ValueType)
? &static_cast<holder<ValueType> *>(content)->held
: 0;
}
// Depricated from birth: This exists to allow code that
// has the VC to_ptr() workaround to also work with
// conforming compilers.
template<typename ValueType>
const ValueType *to_ptr(const ValueType&) const
{
return to_ptr<ValueType>();
}
#else // VC
// Work-around for VC. Not part of "official" interface.
template<typename ValueType>
const ValueType *to_ptr(const ValueType&) const
// Dummy parameter to help VC.
{
return type_info() == typeid(ValueType)
? &static_cast<holder<ValueType> *>(content)->held
: 0;
}
#endif
Another choice is to have to_ptr provide the pointer via a reference
parameter, like copy_to does with the value; however, I'm wary of that idea
since it starts bending our design to appease a broken compiler, which I do
not like.
Documentation:
In the section "ValueType requirements", there is an extra "s" at the of
"Another distinguishing features".
I'd prefer to avoid using coding styles in documentation that cause problems
in the real world. In particular, I'd like to see the explicit std:: and
boost:: scoping used. Also, in count_all(), instead of further engraining
std::endl, perhaps something like:
cout <<
"#empty == " << count_if(values.begin(), values.end(), is_empty) << "\n"
"#int == " << count_if(values.begin(), values.end(), is_int) << "\n"
"#const char * == " << count_if(values.begin(), values.end(),
is_char_ptr) << "\n"
"#string == " << count_if(values.begin(), values.end(), is_string) <<
'\n';
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2000/08/4841.php | CC-MAIN-2019-43 | en | refinedweb |
Introduction
This tutorial is the fifth part of our series on test-driving an Ember.js application. In this series, we have been building a complete application using Ember.js and Ruby on Rails. The premise of our application is a digital bookcase, where we can keep track of all the books we own.
If you’ve finished the fourth tutorial in the series, you’re left with a working book list, but there’s no way to manage your books. In this tutorial, we’ll correct that. We’ll talk about building the necessary CRUD for our application, which will be test-driven. CRUD stands for “Create”, “Read”, “Update”, and “Delete.” Those are the operations we need to fully manage our books.
Upgrading Ember CLI and Friends
We did this once before, in the third tutorial, but time has gone by and the Ember team has been releasing new versions — they release on a six week release cycle. If this is too fast for you or your organization, you’ll be happy to know that they have announced Ember LTS (long-term support). We won’t go into all the details here, but basically it means that the LTS version of Ember will be supported for roughly six months. Version 2.4 is slated to be the first LTS release, and as of this tutorial, we’ll upgrade to the latest version — v2.4.3. Since we went over the step-by-step instructions in the third tutorial, you can visit the release notes which go over the steps. That said, we’ll still look at the things you need to update, so you can easily move through the
ember init step.
- Overwrite README.md? – Yes,
- Overwrite app/app.js? – Yes,
- Overwrite app/index.html? – No,
- Overwrite app/router.js – No,
- Overwrite app/templates/application.hbs? – No,
- Overwrite bower.json? – No, your file should look as follows:
{ "name": "bookcase", "dependencies": { "ember": "~2.4.3", "ember-cli-shims": "0.1.1", "ember-cli-test-loader": "0.2.2", "ember-qunit-notifications": "0.1.0", "jquery-mockjax": "2.0.1" } }
- Overwrite package.json? – No, your
devDependenciesin the file should look as follows:
{ ... "devDependencies": { "broccoli-asset-rev": "^2.4.2", "ember-ajax": "0.7.1", "ember-cli": "2.4.3", "ember-cli-app-version": "^1.0.0", "ember-cli-babel": "^5.1.6", "ember-cli-dependency-checker": "^1.2.0", "ember-cli-htmlbars": "^1.0.3", "ember-cli-htmlbars-inline-precompile": "^0.3.1", "ember-cli-inject-live-reload": "^1.4.0", "ember-cli-qunit": "^1.4.0", "ember-cli-release": "0.2.8", "ember-cli-sri": "^2.1.0", "ember-cli-uglify": "^1.2.0", "ember-data": "^2.4.2", "ember-data-factory-guy": "2.1.3", "ember-export-application-global": "^1.0.5", "ember-load-initializers": "^0.5.1", "ember-resolver": "^2.0.3", "ember-validations": "2.0.0-alpha.4", "loader.js": "^4.0.1" } }
- Overwrite tests/helpers/module-for-acceptance.js? – No, your
afterEachmethod should look like the following:
afterEach() { TestHelper.teardown(); if (options.afterEach) { options.afterEach.apply(this, arguments); } destroyApp(this.application); }
- Overwrite tests/helpers/resolver.js? – Yes.
Now, Ember-CLI will do its thing and update your project. When it’s done, run the following command:
rm testem.json
Back in v2.4.0, the change from testem.json to testem.js was implemented.
Run your tests to make sure everything checks out. If it does, go ahead and check in your changes to source control.
Upgrading ember-data-factory-guy
Since we’ve just updated all our dependencies, especially Ember-Data, let’s go ahead and upgrade our test helper —
ember-data-factory-guy. Open up
package.json and remove
ember-data-factory-guy, then in your terminal, run the command
npm prune. Finally, run
ember install ember-data-factory-guy to get the latest.
Deprecation Workflow
When you ran the tests, you probably got a lot of deprecations. These can just fill up our output window, hiding important errors, so let’s handle these. There is a deprecation workflow that we can follow. First, we’ll install an Ember-CLI add on:
ember install ember-cli-deprecation-workflow
Now, run your tests again in server mode. You can use the shortcut
ember t -s to do this. Once they’re completed, run the command
deprecationWorkflow.flushDeprecations() in the Chrome console. This will dump out the deprecations. Copy these from the output, minus the quotes, to a new file
config/deprecation-workflow.js:
window.deprecationWorkflow = window.deprecationWorkflow || {}; window.deprecationWorkflow.config = { workflow: [ { handler: "silence", matchMessage: "handleFindAll - has been deprecated. Use mockFindAll method instead`" }, { handler: "silence", matchMessage: "Using the injected container is deprecated. Please use the getOwner helper instead to access the owner of this object." } ] };
We’ll get rid of the first deprecation in a minute, it’s an
ember-data-factory-guy change that we’ll fix in our tests. You can go ahead and delete that line.
If you change the remaining
silence to
throw, you’ll see that these deprecations are coming from ember-validations. There is a fix in place for these, but there hasn’t been any new releases. For now, we’ll just silence the deprecations to keep our console clean.
We’ll now deal with the
ember-data-factory-guy cleanup. Do a global find/replace on the term
handleFindAll and change it to
mockFindAll. That’s it, no more deprecation notices.
CRUD Operations
Now that we completed some maintenance on our project, let’s get into the working with our CRUD operations. Since it’s the easiest and the fact that we’re partially done with it, we’ll start with the Read operations. Back in the last tutorial, we presented a list of books. That is one version of a Read operation, but we also want to have a book “details” screen as well. Something that shows the full details of the book.
Read: Book Details
Let’s start with a test. This is a good place for an acceptance test since we’ll be viewing the page through the user’s eyes. If you recall, we can generate an acceptance test through Ember-CLI:
ember g acceptance-test book-details
Now go to
/tests/acceptance/book-details-test.js. Our first step will be to change the initial test. We don’t have, nor want a
/book-details URL. Instead, we’ll have
/books/:id, where
:id will be the ID of the book. Our test file will look like this:
test('visiting /books/1', function(assert) { visit('/books/1'); andThen(function() { assert.equal(currentURL(), '/books/1'); }); });
You can use the shortcut
ember t -s to run the test server, and assuming you started your test server you’ll get a failure:
Error: Assertion Failed: The URL '/books/1' did not match any routes in your application. We’ll fix that. Open up
app/router.js and let’s add the route to the
Router.map:
Router.map(function() { this.route('books'); this.route('book', { path: '/books/:id'}); });
Now our test passes, but it just checks the current URL. What we really want is to test the elements on the page for the correct information about a book, so let’s write another test. We’re going to need two new factories. Add a new file for
author.js and
publisher.js into
tests/factories, and we’ll see how the author and the publisher factories should look like.
The author:
import FactoryGuy from 'ember-data-factory-guy'; FactoryGuy.define('author', { default: { name: 'Damien White' } });
The publisher:
import FactoryGuy from 'ember-data-factory-guy'; FactoryGuy.define('publisher', { default: { name: 'Acme, Inc.' } });
We’ll make use of these factories in our book details test:
test('should show all of the book\'s information', function(assert){ let publisher = make('publisher', {name: 'Acme, Inc.'}); let author = make('author', {name: 'Damien White'}); mockFind('book', { id: 1, title: 'Developing Foo', isbn: '0123456789', publisher: publisher, authors: [author] }); visit('/books/1'); andThen(function() { assert.equal(find('.title').text(), 'Developing Foo'); assert.equal(find('.isbn').text(), '0123456789'); assert.equal(find('.publisher').text(), 'Acme, Inc.'); assert.equal(find('.author').text(), 'Damien White'); }); });
Now, we have four failures. Let’s fix this by using Ember-CLI to generate a route for us:
ember g route book
Our first step in fixing the error is to load our book model in the route. In our previous tutorials, we used the route’s
model hook. Your
app/routes/book.js should look like the following:
import Ember from 'ember'; export default Ember.Route.extend({ model: function(params) { return this.store.findRecord('book', params.id); } });
With that change, we now have two failing tests. The first is our original test where we are checking URL. To fix this, we just have to mock the
findRecord call. We’ll add one line to the beginning of the first test:
mockFind('book', { id: 1 });
We’ll now focus on the
app/templates/book.hbs template. We’ll add the data points we are looking for in this file:
<div class="container"> <img src={{model.cover}} alt={{model.title}} /> <h1 class="title">{{model.title}}</h1> <h3 class="publisher">{{model.publisher.name}}</h3> <h4 class="isbn">{{model.isbn}}</h4> <ul> {{#each model.authors as |author|}} <li class="author">{{author.name}}</li> {{/each}} </ul> </div>
Now, our test is passing, but if you run the application against the API we have, you’ll find it isn’t working quite right. We need to tweak our API. Since the related data is small, we’ll simply include the authors and the publisher along with the book if the user is requesting the
show route. Remember, we’re in the Rails project now, not the Ember project.
It requires two changes, first to the
app/controllers/books_controller.rb:
# GET /books def index @books = Book.includes(:publisher, :authors).all render json: @books, include: %w(publisher authors) end # GET /books/1 def show render json: @book, include: %w(publisher authors) end
This tells
ActiveRecord to include the publisher and the author’s details, so that we don’t have an N+1 query.
We modified both the
index and
show actions because we’ll change the
book_serializer, which will affect both actions. Now, we just need to actually alter the
app/serializers/book_serializer.rb to look like the following:
class BookSerializer < ActiveModel::Serializer attributes :id, :title, :isbn, :cover belongs_to :publisher has_many :authors end
That’s all that was needed on the Rails side of things.
We can now tell Ember-Data not to asynchronously load the relationships —
app/models/book.js:
publisher: DS.belongsTo('publisher', { async: false }), authors: DS.hasMany('author', { async: false })
Now, with the loading logic out of the way, we’ll link the
books route with the
book detail route that we’ve just created. This way we’ll be able to click on a book in the book list and get all the book’s details. This simply added a
link-to around our book cover that we are displaying in
app/templates/components/book-list.hbs:
{{#each filteredBooks as |book|}} <div class="book" data- {{#link-to "book" book}}<img src="{{book.cover}}" height="160" />{{/link-to}} </div> {{/each}}
Notice that we are passing the
book model to the
link-to helper. By doing this, we won’t have to query the API again because the model is already fully filled, thanks to our API’s
index action. If a user hits the book detail page directly, then it will call the
show action of our API.
Create: Add Book
Now that our read operations are out of the way, let’s finally give our users the ability to add a book. Again, we’ll start with an acceptance test:
ember g acceptance-test book-new
Our first change/test is the route. We want RESTful routes on our front-end, so we’ll start by changing the default test that was generated for us.
test('visiting /book/new', function(assert) { visit('/books/new'); andThen(function() { assert.equal(currentURL(), '/books/new'); }); });
The solution is to add the new route to
app/router.js:
Router.map(function() { this.route('books'); this.route('new-book', { path: '/books/new' }); this.route('book', { path: '/books/:id'}); });
Routes are considered “greedy”, matching from top to the bottom, so if we put the
new-book route below the
book route, the
book route would be matched first causing us errors.
In our acceptance test, we’ll write a test for fully adding a record. This will test all the form elements on the screen, and we’ll even mock saving the record using
ember-data-factory-guy. Let’s look at our new test:
import { mockCreate } from 'ember-data-factory-guy'; ... test('can be created', function(assert){ mockCreate('book'); visit('/books/new'); andThen(function() { fillIn('.title', 'Ember is Awesome'); fillIn('.isbn', '0123456789'); fillIn('.cover', ''); }); andThen(function(){ click('button[type=submit]'); }); andThen(function(){ assert.equal($.mockjax.mockedAjaxCalls()[0].url, '/books'); assert.equal(currentURL(), '/books/1'); }); });
This is very straight-forward, except for the first assertion. Here we’re inspecting Mockjax’s collection of mocked Ajax calls to ensure that there was a call out, a
POST, to
/books. The
ember-data-factory-guy, uses jquery-mockjax to intercept the calls out to the server. Pay attention to the first line of the test, here we’ll use the function
mockCreate to have FactoryGuy intercept the call out to the server.
We’ll now fix the broken test. Run the following command to generate a
new-book route:
ember g route new-book
That created a
route, a
template, and a
testfor us. We’ll first focus on the template, which should look as follows:
<div class="container"> <form> <div class="form-group"> <label>Title</label> {{input value=model.title <label>ISBN</label> {{input value=model.isbn <label>Cover</label> {{input value=model.cover class="form-control cover"}} </div> <button type="submit" class="btn btn-primary" {{action 'save' model}}>Submit</button> </form> </div>
Finally, we’ll need some code in our route file —
app/routes/new-book.js:
import Ember from 'ember'; export default Ember.Route.extend({ model: function() { return this.store.createRecord('book'); }, actions: { save: function(model) { model.save() .then((book) => { this.transitionTo('book', book); }) .catch(function(error) { console.log(error); }); } } });
That’s all the code required for our test to pass. Though, if you try out the actual site, you’ll find that you can’t add a book, and you get an error. In our Rails backend, a
book
belongs_to a
publisher. In order to get the form to work we need a drop-down list of publishers. There are many ways to tackle a drop-down in Ember, but we’ll use Ember Power Select. This is an Ember-CLI add-on, thus we need to install it:
ember install ember-power-select
After installation, ember-power-select may have included an
app.scss file in your
styles directory. Since we aren’t using Sass in this project, you can safely delete this file.
Now, let’s go ahead and add the power-select to our project. In the
app/templates/new-book.hbs file, we’ll add the following:
... <div class="form-group"> <label>Publisher</label> {{#power-select searchEnabled=false selected=model.book.publisher options=model.publishers onchange=(action (mut model.book.publisher)) as |publisher| }} {{publisher.name}} {{/power-select}} </div> ...
You’ll notice that we’re now going after
model.book.publisher instead of just
model.publisher. We needed this change because we’re going to alter the route to pull in two models when it loads, instead of just one that we have now. We need the
book model and the
publishers that we can choose from. This change occurs in
/app/routes/new-book.js:
export default Ember.Route.extend({ model: function() { return new Ember.RSVP.hash({ book: this.store.createRecord('book'), publishers: this.store.findAll('publisher') }); } ... })
We’re using an
Ember.RSVP.hash, which will wait to resolve until both promises are fulfilled. This way we’ll have publishers we can choose from when we’re on the form. Now make sure you alter the rest of the fields in the new-book form to be
model.book.<attribute>, otherwise things won’t bind correctly. For example:
<div class="form-group"> <label>Title</label> {{input value=model.book.title class="form-control title"}} </div>
With those changes in place, you now should be able to add a new
book.
Update: Updating a Book
Updating a book is very similar to creating a new book. Because of this, we should create a component for our
book form. The only difference is going to be in the route. Note that you could also use a partial for this purpose, but components are a better option as they give us more power. Let’s create a new component, we’ll call it “book-form.”
ember g component book-form
We’ll take the contents of
app/templates/new-book.hbs and and copy it into
book-form.hbs. Your file should look like this:
<div class="container"> <form> <div class="form-group"> <label>Title</label> {{input value=book.title <label>ISBN</label> {{input value=book.isbn <label>Cover</label> {{input value=book.cover <label>Publisher</label> {{#power-select searchEnabled=false selected=book.publisher options=publishers onchange=(action (mut book.publisher)) as |publisher| }} {{publisher.name}} {{/power-select}} </div> <button type="submit" class="btn btn-primary" {{action 'save' book}}>Submit</button> </form> </div>
Notice that the model is just
book now, instead of
model.book, and
publishers will just be a collection on the component. With this in place, we can change
/app/templates/new-book.hbs to be:
{{book-form book=model.book publishers=model.publishers}}
The only thing we need to alter now is the save action. We’re using the route to handle the save and want to continue to do so. Now that we have a component, we’ll need to use a closure action to do the save. Currently, with Ember, we aren’t able to bubble a closure action to a route. However, with the handy ember-route-action helper from Dockyard we’re able to do what we want. We’ll install this like every other Ember add on:
ember install ember-route-action-helper
Now, let’s utilize the helper in our
app/templates/new-book.hbs code:
{{book-form book=model.book publishers=model.publishers save=(route-action 'save')}}
When routable components land, you should be able to find/replace
route-action with
action.
With that in place, let’s actually update a book. We’ll begin with a test. However, if you have been running your tests, you are probably having issues. This is because we introduced the publisher dropdown and we filled it from
/publishers in the API. We need to mock this call to
/publishers using
ember-data-factory-guy. In addition, we’ll use ember-power-select’s acceptance test helpers. We’ll run through these changes rather quickly, so we can move on to updating our books.
Within
tests/acceptance/book-new-test.js, we’ll first mock our
findAll call using
ember-data-factory-guy, and change the import at the top of the file to look as follows:
import { mockCreate, mockFindAll } from 'ember-data-factory-guy';
Now, we can make use of
mockFindAll in both tests that we have, it’s just one line at the beginning of the test:
mockFindAll('publisher', 2);
This one JavaScript line will create a collection of 2 publishers. Again, it’s needed for both tests. Note, you could also do this in a
beforeEach function if you want to.
Next up, let’s add ember-power-select’s helper methods, which involves two things. First is changing
tests/helpers/start-app.js to add in the helpers per the documentation. You’ll notice in the documentation that we need to add two lines, both towards the top of the file outside of the
startApp function:
import registerPowerSelectHelpers from '../../tests/helpers/ember-power-select'; registerPowerSelectHelpers();
Finally, we’re going to use ember-power-select’s
selectChoose method. Since JSHint doesn’t know anything about this method, we’ll modify the
tests/.jshintrc file, and add the following entry under the
predef array:
"selectChoose"
With these changes, we can now use
selectChoose to pick a publisher during the second test. This isn’t 100% necessary in this test, since we’re mocking the create and don’t have any validations in place, but it’s a good idea to include it and make the test complete.
While we are tweaking things, we’ll make a slight change to the publisher to achieve unique names from the factory —
tests/factories/publisher.js:
FactoryGuy.define('publisher', { sequences: { publisherName: function(num) { return 'Publisher ' + num; } }, default: { name: FactoryGuy.generate('publisherName'), } });
Now, we’ll use our publisher select. It’s just one line in
tests/acceptance/book-new-test.js‘s second test:
selectChoose('.publisher', 'Publisher 2');
One last thing, in the second test, we need to change the
$.mockjax.mockedAjaxCalls()[0].url to be
$.mockjax.mockedAjaxCalls()[1].url because the first
mockedAjaxCalls() URL is
/publishers.
All our tests are passing again. It’s important to always have the Ember test server running in the background, so you can catch these issues quickly.
Back to updating a book. Add a new acceptance test for updating a book. In the command line, we can do this using Ember-CLI:
ember g acceptance-test book-update
Here is our full acceptance test for updating a book:
import { test } from 'qunit'; import moduleForAcceptance from 'bookcase/tests/helpers/module-for-acceptance'; import { make, mockFind, mockUpdate, mockFindAll } from 'ember-data-factory-guy'; moduleForAcceptance('Acceptance | book update'); test('visiting /book/1/update', function(assert) { mockFindAll('publisher', 2); mockFind('book', { id: 1 }); visit('/books/1/update'); andThen(function() { assert.equal(currentURL(), '/books/1/update'); }); }); test('can be updated', function(assert){ mockFindAll('publisher', 2); let book = make('book', { id: 1 }); mockFind('book', book); mockUpdate(book); visit('/books/1/update'); andThen(function() { fillIn('.title', 'Ember is Awesome'); fillIn('.isbn', '0123456789'); selectChoose('.publisher', 'Publisher 2'); fillIn('.cover', ''); }); andThen(function(){ click('button[type=submit]'); }); andThen(function(){ assert.equal($.mockjax.mockedAjaxCalls()[1].url, '/books/1'); assert.equal(currentURL(), '/books/1'); }); });
In order to get these tests to pass, we need to add a new route:
ember g route update-book
In
app/route.js, we want the update-book route to look like the following:
Router.map(function() { ... this.route('update-book', { path: '/books/:id/update' }); });
Now, in
app/routes/update-book.js, we’ll have code very similar to that for our new-book route:
import Ember from 'ember'; export default Ember.Route.extend({ model: function(params) { return new Ember.RSVP.hash({ book: this.store.findRecord('book', params.id), publishers: this.store.findAll('publisher') }); }, actions: { save: function(model) { model.save() .then((book) => { this.transitionTo('book', book); }) .catch(function(error) { console.log(error); }); } } });
Finally, in
app/templates/update-book.hbs, we’ll utilize our book-form component like we did earlier for the new-book template:
{{book-form book=model.book publishers=model.publishers save=(route-action 'save')}}
With this component we’ve made our
update code.
Delete: Deleting a Book
We’ve reached the last CRUD operation — delete/destroy. We’ll again start with an acceptance test:
ember g acceptance-test book-delete
The contents of the file contain one test:
import { test } from 'qunit'; import moduleForAcceptance from 'bookcase/tests/helpers/module-for-acceptance'; import { make, mockFind, mockDelete, mockFindAll } from 'ember-data-factory-guy'; moduleForAcceptance('Acceptance | book delete'); test('can be deleted', function(assert){ let book = make('book', { id: 1 }); mockFind('book', book); mockDelete('book', 1); mockFindAll('book'); visit('/books/1'); andThen(function() { click('.btn-danger'); }); andThen(function(){ assert.equal($.mockjax.mockedAjaxCalls()[1].url, '/books/1'); assert.equal($.mockjax.mockedAjaxCalls()[1].type, 'DELETE'); assert.equal(currentURL(), '/books'); }); });
To make our test pass, we’ll first add a button to the “show” page,
app/templates/book.hbs:
<div class="container"> ... <button class="btn btn-danger" {{action "delete" model}}>Delete</button> </div>
Then, we’ll add an action to route (
app/routes/book.js) to actually do the delete:
export default Ember.Route.extend({ ... actions: { delete(book) { book.destroyRecord() .then(() => { this.transitionTo('books'); }) .catch(function(error) { console.log(error); }); } } });
That’s all there is to it. Of course, you’ll probably want to confirm with the user that they want to delete the record, as right now it just deletes as soon as you click the button.
Conclusion
CRUD is a very important part of most applications and, with this tutorial under your belt, you should be able to test these operations in all your applications. In this tutorial, we saw how we could leverage add ons like
ember-data-factory-guy to make our tests simple and easy to understand.
Same as with the other tutorials in the series, this code can also be found on GitHub. The Rails respository has been tagged with
Part5End, as has the Ember repo. We hope you’ll find this tutorial useful. Feel free to share it and post your comments and questions below. | https://semaphoreci.com/community/tutorials/test-driving-ember-js-crud-operations | CC-MAIN-2019-43 | en | refinedweb |
Check this out: I'm going to turn off my Wifi! Gasp! What do you think will happen? I mean, other than I'm gonna miss all my Tweets and Instagrams! What will happen when I refresh? The page will load, but all the images will be broken, right?
In the name of science, I command us to try it!
Woh! An error!?
Error executing ListObjects on ... Could not contact DNS servers.
What? Why is our Symfony app trying to connect to S3?
Here's the deal: on every request... for every thumbnail image that will be rendered, our Symfony app makes an API request to S3 to figure out if the image has already been thumbnailed or if it still needs to be. Specifically, LiipImagineBundle is doing this.
This bundle has two key concepts: the resolver and the loader. But there are actually three things that happen behind the scenes. First, every single time that we use
|imagine_filter(), the resolver takes in that path and has to ask:
Has this image already been thumbnailed?
And if you think about it, the only way for the resolver to figure this out is by making an API request to S3 to ask:
Yo S3! Does this thumbnail file already exist?
If it does exist, LiipImagineBundle renders a URL that points directly to that image on S3. If not, it renders a URL to the Symfony route and controller that will use the loader to download the file and the resolver to save it back to S3.
Phew! The point is: on page load, our app is making one request to S3 per thumbnail file that the page renders. Those network requests are super wasteful!
What's the solution? Cache it! Go back to OneupFlysystemBundle and find the main page of their docs. Oh! Apparently I need Wifi for that! There we go. Go back to their docs homepage and search for "cache". You'll eventually find a link about "Caching your filesystem".
This is a super neat feature of Flysystem where you can say:
Hey Flysystem! When you check some file metadata, like whether or not a file exists, cache that so that we don't need to ask S3 every time!
Actually, it's even more interesting & useful. LiipImagineBundle calls the
exists() method on the
Filesystem object to see if the thumbnail file already exists. If that returns false, the cached filesystem does not cache that. But if it returns true, it does cache it. The result is this: the first time LiipImagineBundle asks if a thumbnail image exists, Flysystem will return false, and Liip will know to generate it. The second time it asks, because the "false" value wasn't cached, Flysystem will still talk to S3, which will now say:
Yea! That file does exist.
And because the cached adapter does cache this, the third time LiipImagineBundle calls
exists, Flysystem will immediately return
true without talking to S3.
To get this rocking, copy the composer require line, find your terminal and paste to download this "cached" Flysystem adapter.
composer require league/flysystem-cached-adapter
While we're waiting, go check out the docs. Here's the "gist" of how this works, it's 3 parts. First, you have some existing filesystem - like
my_filesystem. Second, via this
cache key, you register a new "cached" adapter and tell it how you want things to be cached. And third, you tell your existing filesystem to process its logic through that cached adapter. If that doesn't totally make sense yet, no worries.
For how you want the cached adapter to cache things, there are a bunch of options. We're going to use the one called PSR6. You may or may not already know that Symfony has a wonderful cache system built right into it. Anytime you need to cache anything, you can just use it!
Start by going to
config/packages/cache.yaml. This is where you can configure anything related to Symfony's cache system, and we talked a bit about it in our Symfony Fundamentals course. The
app key determines how the
cache.app service caches things, which is a general-purpose cache service you can use for anything, including this! Or, to be fancier - I like being fancy - you can create a cache "pool" based on this.
Check it out. Uncomment
pools and create a new cache pool below this called
cache.flysystem.psr6. The name can be anything. Below, set
adapter to
cache.app.
That's it! This creates a new cache service called
cache.flysystem.psr6 that, really... just uses
cache.app behind the scenes to cache everything. The advantage is that this new service will automatically use a cache "namespace" so that its keys won't collide with other keys from other parts of your app that also use
cache.app.
In your terminal, run:
php bin/console debug:container psr6
There it is! A new fancy
cache.flysystem.psr6 service.
Back in
oneup_flysystem.yaml, let's use this! On top... though it doesn't matter where, add
cache: and put one new cached adapter below it:
psr6_app_cache. The name here also doesn't matter - but we'll reference it in a minute.
And below that add
psr6:. That exact key is the important part: it tells the bundle that we're going to pass it a PSR6-style caching object that the adapter should use internally. Finally, set
service to what we created in
cache.yaml:
cache.flysystem.psr6.
At this point, we have a new Flysystem cache adapter... but nobody is using it. To fix that, duplicate
uploads_filesystem and create a second one called
cached_uploads_filesystem. Make it use the same adapter as before, but with an extra key:
cache: set to the adapter name we used above:
psr6_app_cache.
Thanks to this, all Filesystem calls will first go through the cached adapter. If something is cached, it will return it immediately. Everything else will get forwarded to the S3 adapter and work like normal. This is classic object decoration.
After all of this work, we should have one new service in the container. Run:
php bin/console debug:container cached_uploads
There it is:
oneup_flysystem.cached_uploads_filesystem_filesystem. Finally, go back to
liip_imagine.yaml. For the loader, we don't really need caching: this downloads the source file, which should only happen one time anyways. Let's leave it.
But for the resolver, we do want to cache this. Add the
cached_ to the service id. The resolver is responsible for checking if the thumbnail file exists - something we do want to cache - and for saving the cached file. But, "save" operations are never cached - so it won't affect that.
Let's try this! Refresh the page. Ok, everything seems to work fine. Now, check your tweets, like some Instagram photos, then turn off your Wifi again. Moment of truth: do a force refresh to fully make sure we're reloading. Awesome! Yea, the page looks terrible - a bunch of things fail. But our server did not fail: we are no longer talking to S3 on every request. Big win.
Next, let's use a super cool feature of S3 - signed URLs - to see an alternate way of allowing users to download private files, which, for large stuff, is more performant. | https://symfonycasts.com/screencast/symfony-uploads/cached-s3-filesystem | CC-MAIN-2019-43 | en | refinedweb |
hash.; }
How data is stored in HashMap
First of all the Node array size is always 2^N. Following method guarantees it -
static final int tableSizeFor(int cap) { int n = cap - 1; n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16; return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1; }
So lets say you provide initial capacity as 5
cap = 5 n = cap - 1 = 4 = 0 1 0 0 n |= n >>> 1; 0 1 0 0 | 0 0 1 0 = 0 1 1 0 = 6 n |= n >>> 2; 0 0 1 1 | 0 1 1 0 = 0 1 1 1 = 7 n |= n >>> 4; 0 0 0 0 | 0 1 1 1 = 0 1 1 1 = 7 n |= n >>> 8; 0 0 0 0 | 0 1 1 1 = 0 1 1 1 = 7 n |= n >>> 16; 0 0 0 0 | 0 1 1 1 = 0 1 1 1 = 7 return n + 1 7 + 1 = 8
So table size is 8 = 2^3
Now possible index values you can put your element in map are 0-7 since table size is 8. Now lets look at put method. It looks for bucket index as follows -
Node<K,V> p = tab[i = (n - 1) & hash];
where n is the array size. So n = 8. It is same as saying
Node p = tab[i = hash % n];
So all we need to see now is how
hash % n == (n - 1) & hash
Lets again take an example. Lets say hash of a value is 10.
hash = 10 hash % n = 10 % 8 = 2 (n - 1) & hash = 7 & 10 = 0 1 1 1 & 1 0 1 0 = 0 0 1 0 = 2
So it's a optimized modulo operation with bitwise & operator.; }
Understanding LinkedHashMap
LinkedHashMap as we know stores the insertion order. However it also extends HashMap so it respects O(1) time complexity as well for insertion. So how does it really work.
LinkedHashMap maintains another type on Entry which holds pointer to previous as well as next Entry Node.
static class Entry<K,V> extends HashMap.Node<K,V> { Entry<K,V> before, after; Entry(int hash, K key, V value, Node<K,V> next) { super(hash, key, value, next); } }
LinkedHashMap also stores head and tail of this double linked list and thats how it maintains the insertion order. So even though put follows hashing storage and retrieval order is maintained using doubly linked list.
/** * The head (eldest) of the doubly linked list. */ transient LinkedHashMap.Entry<K,V> head; /** * The tail (youngest) of the doubly linked list. */ transient LinkedHashMap.Entry<K,V> tail;
So whenever you add a new key, value pair following happens -
Node<K,V> newNode(int hash, K key, V value, Node<K,V> e) { LinkedHashMap.Entry<K,V> p = new LinkedHashMap.Entry<K,V>(hash, key, value, e); linkNodeLast(p); return p; } // link at the end of list private void linkNodeLast(LinkedHashMap.Entry<K,V> p) { LinkedHashMap.Entry<K,V> last = tail; tail = p; if (last == null) head = p; else { p.before = last; last.after = p; } }
When iterating it iterates from head to tail thereby providing same order as insertion.
NOTE : before and after pointers are in addition to the next pointer which is inherited from HashMap.Node class. So next points to next node having same hash (collision scenario) thereby preserving O(1) lookups. Before and After pointers guarantee insertion lookup order.
NOTE : before and after pointers are in addition to the next pointer which is inherited from HashMap.Node class. So next points to next node having same hash (collision scenario) thereby preserving O(1) lookups. Before and After pointers guarantee insertion lookup order.
Understanding TreeMap
TreeMap as you already know stores the data in sorted order. If the data it stores implements Comparable interface then it stores data in that natural order or you can pass a custom comparator to the TreeMap and it will use that to sort and store the data.
TreeMap maintains a binary search tree structure. It has a root, value less than root go to left where as value greater than root go to right and it's balanced too (RBT - A red–black tree is a kind of self-balancing binary search tree).
/** * The comparator used to maintain order in this tree map, or * null if it uses the natural ordering of its keys. * * @serial */ private final Comparator<? super K> comparator; private transient Entry<K,V> root;
Simple code snippet would be -
Entry<K,V> e = new Entry<>(key, value, parent); if (cmp < 0) parent.left = e; else parent.right = e;
where cmp is the comparison value got either from comparators compare() method or Comparables compareTo() method.
Related Links
- How ConcurrentHashMap Works Internally in Java(OSFG)
-. | http://opensourceforgeeks.blogspot.com/2015_02_13_archive.html | CC-MAIN-2019-43 | en | refinedweb |
Background
Method overloading and method overriding are two important concepts in Java. It is very crucial to know the way they are work and the way they are used.
The main motive of this post is to understand Method overriding and it's usage. Lets quickly understand what both terms mean and move on to the overriding part for details.
Method OverloadingMethod or Function overloading means two or more functions with same name but different number and type of arguments(Signature). The purpose of this is to provide common functionality but in different possible scenarios. Example -
public void setEmployeeInfo(String name); public void setEmployeeInfo(String name, int age); public void setEmployeeInfo(String name, int age, String id);
Name of the function remains the same but the number and type of arguments changes.
Note : return type has noting to do with overloading. In fact return type of a method is not a part of it's signature. Only the function name and the arguments form a part of method signature. Also as we know that two methods with same signature are not allowed following scenario is not allowed -
public String getEmployee(String name); public Employee getEmployee(String name);
Note : Also keep in mind method overloading is a compile time phenomenon. Which if the overloaded method is to be executed is decided at compile time based the method signature.
Method overloading has some resolution rules that decides which method to pick up at compile time. Compilation will fail if no methods are matched. For example consider following methods -
- public void getData(String dataName)
- public void getData(Object dataName)
Method Overriding
Method overriding comes into picture only when there is inheritance involved. If there is a method in a super class then a sub class can override it. Let us first see an example and then dive into various aspects of it.
Lets say we have a Animal class in a package called worldEnteties. It has a move method as follows.
package worldEnteties; public class Animal { public void move() { System.out.println("Animal is moving"); } }
Also we have a Dog class which extends this Animal class. In this Dog class we override the move() method to do some different movement(Dog specific).
package worldEnteties;
public class Dog extends Animal { @Override public void move(){ System.out.println("Dog is moving"); } }
Now when you create a normal Animal object and call move() on it, it will simply execute the function in Animal class.
Code :
public static void main(String args[]) { Animal animal = new Animal(); animal.move(); }
Output :
Animal is moving
Now when we create a Dog object and call move() on it, it will execute the corresponding move() function.
Code :
public static void main(String args[]) { Dog dog = new Dog(); dog.move(); }
Output :
Dog is moving
Now the question may arise -
Q ) What if I wast to invoke move() in Animal when move() in Dog is invoked? After all Dog is an Animal.
Ans ) The answer is you can call super.move() inside the overridden function to call corresponding function in the superclass. Yes it happens by default in constructors but in normal functions you have to explicitly invoke super.functionName() call. A sample code would be as follows -
Modify the overridden move() method in Dog class as follows
package worldEnteties; public class Dog extends Animal { @Override public void move(){ super.move(); System.out.println("Dog is moving"); } }
Now execute the following code and examine the results -
Code :
public static void main(String args[]) { Dog dog = new Dog(); dog.move(); }
Output :
Animal is moving
Dog is moving
Dog is moving
Method Overriding is a runtime phenomenon!!!!
So far so good!! Now lets involve polymorphism in it and lets see what makes method overriding a runtime phenomenon.[Always remember polymorphism as superclass reference to subclass object] Consider the following code(No super call just plain simple inheritance and explained in the 1st case above) -
code :
public static void main(String args[]) { Animal dog = new Dog(); dog.move(); }
Output:
Dog is moving
How it works?
At compile time all that java compiler knows is the reference type Animal in our case. All compiler does is check whether move() method is present(or at-least declared) in the Animal class(If not compilation error will occur). As in our case it is present and code compiles fine. Now When we run the code, it is at this time the JVM will know the runtime class of the object [You can also print it using dog.getClass()] which is class Dog in our case. Now JVM will execute the function in Dog and we get our output.
Code Snap :
Another important point to note is that the access modifier of the overriding function can be liberal! Meaning if access modifier of super class is private than modifier of overriding method is subclass can be default(no modifier), protected or public. It cannot be the other way around(overriding method cannot have more strict access modifier).
Rules for method overriding
For overriding, the overridden method has a few rules:
- The access modifier must be the same or more accessible.
- The return type must be the same or a more restrictive type, also known as covariant return types.
- If any checked exceptions are thrown, only the same exceptions or subclasses of those
exceptions are allowed to be thrown.
- The methods must not be static. (If they are, the method is hidden and not
overridden.) | http://opensourceforgeeks.blogspot.com/2013_10_11_archive.html | CC-MAIN-2019-43 | en | refinedweb |
Contains utilization and resource usage statistics for the lifetime of a pool.
public class PoolStatistics
type PoolStatistics = class
Public Class PoolStatistics
Gets the time at which the statistics were last updated. All statistics are limited to the range between StartTime and this value.
Gets statistics related to resource consumption by compute nodes in the pool, such as average CPU utilization.
Gets the start time of the time range covered by the statistics.
Gets the URL for the statistics.
Gets statistics related to pool usage, such as the amount of core-time used.
Thank you. | https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.batch.poolstatistics?view=azure-dotnet | CC-MAIN-2019-43 | en | refinedweb |
Verifly v0.2
This gem consists of several dependent components, which all could be used standalone. The most important one is Verifier, but understanding Applicator and ApplicatorWithOptions helps understand its API. The least important one, ClassBuilder, is only used in private APIs, but its own API is public
Instalation
$ gem install verifly
and then in code
require 'verifly'
ClassBuilder
example:
Abstract = Struct.new(:data) extend Verifly::ClassBuilder::Mixin class WithString < self def self.build_class(x) self if x.is_a?(String) end end Generic = Class.new(self) self.buildable_classes = [WithString, Generic] # or, vice versa def self.buildable_classes [WithString, Generic] end end Abstract.build("foo") # => WithString.new("foo") Abstract.build(:foo) # => Generic.new("foo")
or see lib/verifier/applicator.rb
Why don't just use Uber::Builder? (Uber is cool, you should try it) There are two reasons: firstly, it is an unnecessary dependency. We dont want npm hell, do we? Uber::Builder realy does not do much work, it's just a pattern. Secondly, this implementation looks more clear to me, because children are deciding whether they will handle arguments, not parents.
So to use it, you have to:
Write some classes with duck type
.class_builder(*args)
Invoke
Verifly::ClassBuilder.new([<%= array_of_classes %>]).call(*args)
????
PROFIT
It's simple and clear, but not very sugary. So, otherwise, you may do following:
Write an abstract class
Extend
Verifly::ClassBuilder::Mixin
Inherit from the abstract class in different implementations
If some implementations have common ancestors (not including the abstract class), you can implement common ancestor's
.build_classin terms of super (i.e.
def self.build_class(x); super if x.is_a?(String); end)
Change
.build_classof other classes like
self if .... Don't change default implementation's
.build_class
Setup
.buildable_classeson the abstract class, mentioning only direct chldren if you done step 4
Optionally redefine
.buildin abstract class, if you want to separate
build_classand constructor params
Use
.buildinstead of
new
Applicator
Applicator is designed to wrap applications of applicable objects around some binding in some context
example:
object = OpenStruct.new(foo: :bar) Applicator.call(:foo, object, {}) # => :bar Applicator.call('foo', object, {}) # => :bar Applicator.call('context', object, {}) # => {} Applicator.call(-> { foo }, object, {}) # => :bar Applicator.call(->(context) { context[foo] }, object, bar: :baz) # => :baz Applicator.call(true, object, {}) # => true foo = :bar Applicator.call(:foo, binding, {}) # => :bar Applicator.call('object.foo', binding, {}) # => :bar
Applicator is good, but in most cases ApplicatorWithOptions would be a better option.
ApplicatorWithOptions
ApplicatorWithOptions is an applicator with options.
The options are
if: and
unless:. Same as in ActiveModel::Validations,
they are applied to the same binding. Main action is executed
only if
if: evaluates to truthy and
unless: evaluates to falsey.
See examples:
ApplicatorWithOptions.new(:foo, if: -> { true }).call(binding, {}) # => foo ApplicatorWithOptions.new(:foo, if: -> (context) { context[:bar] }) .call(binding, { bar: true }) # => foo ApplicatorWithOptions.new(:foo, if: { bar: true }).call(binding, :bar) # => foo ApplicatorWithOptions.new(:foo, unless: -> { true }) .call(binding, {}) # => nil ApplicatorWithOptions.new(:foo, unless: -> (context) { context[:bar] }) .call(binding, { bar: true }) # => foo ApplicatorWithOptions.new(:foo, unless: { bar: true }) .call(binding, :bar) # => nil
Verifier
The last, but the most interesting component is Verifier.
Verifiers use ApplciatorWithOptions to execute generic procedures.
Procedures should call
message! if they want to yield something.
Note that you should implement
message! by yourself (in terms of super)
class MyVerifier < Verifly::Verifier Message = Struct.new(:text) verify :foo, if: { foo: true } private def message!(text) super { Message.new(text) } end def foo message!('Something is wrong') if Fixnum != Bignum end end
In addition to Applicator's power, you also can nest your verifiers to split the logic
class MyVerifier < Verifly::Verifier Message = Struct.new(:text) verify_with ChildVerifier, if: -> (context) { cotnext[:foo] } private def message!(text) super { Message.new(text) } end end class ChildVerifier < MyVerifier verify %q(message!("it's alive!")) end | https://www.rubydoc.info/github/umbrellio/verifly | CC-MAIN-2019-43 | en | refinedweb |
[
]
Stephe edited comment on WW-4804 at 6/19/17 1:43 PM:
-----------------------------------------------------
See update - had to try and trim it down. The real class is massive.
****
Probably found the source of the issue-
Uncaught ReferenceError: StrutsUtils is not defined.
Just need to work out why that isn't included...
was (Author: suipaste):
See update - had to try and trim it down. The real class is massive.
> inputtransferselect does not auto-select its elements
> -----------------------------------------------------
>
> Key: WW-4804
> URL:
> Project: Struts 2
> Issue Type: Bug
> Components: Core Tags
> Affects Versions: 2.5.10
> Reporter: Stephe
> Priority: Minor
> Labels: newbie
> Fix For: 2.5.next
>
>
> I assume that this is a bug though and I am using the tag correctly.
> I have been trying to use the inputtransferselect tag. I created the component in my
form and it renders on the page fine. However the documentation [link here|]
states that it "Will auto-select all its elements upon its containing form submission." However
based on my attempts this is not the case. You need to select each entry which is added for
them to come through to the action.
> Ideally I'd post all of the code but I'll need to cut it down because it's corporate
(hopefully that won't hide what's going wrong)
> Action class:
> {code:java}
> public class MyAction implements Preparable
> {
> private List<String> myList= new ArrayList<>();
> public List<String> getMyList()
> {
> return this.dependants;
> }
> public void setMyList(List<String> pDependants)
> {
> this.dependants = pDependants;
> }
> @Override
> public void prepare() throws Exception
> {
> // Populates my list from session memory, this works fine...
> }
> @Override
> @Actions({
> @Action(
> value = "/" + ActionPathConstants.SAVE,
> interceptorRefs = @InterceptorRef(
> value = "defaultSecurityStack",
> params = {
> "tokenSession.includeMethods",
> "*" }),
> results = {
> @Result(
> type = "redirectAction",
> name = ActionPathConstants.SUCCESS,
> location = ActionPathConstants.NEXT_PAGE),
> }) })
> public String save() throws Exception
> {
> // This maps the value of myList back to session memory
> }
> }
> {code}
> JSP file:
> {code:jsp}
> <s:form
> <s:inputtransferselect
> />
> </s:form>
> {code}
> Note that the from is submitted via javascript. Rather than by directly pressing a submit
button. Could this have any impact on how the auto select works?
--
This message was sent by Atlassian JIRA
(v6.4.14#64029) | http://mail-archives.apache.org/mod_mbox/struts-issues/201706.mbox/%3CJIRA.13080456.1497632395000.47753.1497879840211@Atlassian.JIRA%3E | CC-MAIN-2019-43 | en | refinedweb |
Help to understand an AttributeError in a polynomial ring
F. Chapoton recently wrote a program the behaviour of which I do not understand.
def fermat(n): q = polygen(ZZ, 'q') return sum(n ** j * binomial(n, j) * (-1) ** (i + n + j) * binomial(n - 2 - j + 1, i + 1) * q ** i for j in range(n - 1) for i in range(n - 1 - j))
Now consider:
v = fermat(5) print v.parent() print v.list()
This outputs
Univariate Polynomial Ring in q over Integer Ring [821, 181, 21, 1]
which is fine. However the loop
for n in (1..9): v = fermat(n) print v.parent() print v.list()
gives the errors:
AttributeError: 'int' object has no attribute 'parent' AttributeError: 'int' object has no attribute 'list'
What happens here?
For n=1, the sum is empty and by default this gives a python int. That is because I simplified my program for oeis. If you care, you need to add R=q.parent() and then use R.sum(...)
Thanks Frédéric. If you write your comment as an answer I will accept it. So all this has nothing to do with the preparser or range formats as suggested in kcrisman's answer. | https://ask.sagemath.org/question/36387/help-to-understand-an-attributeerror-in-a-polynomial-ring/?sort=latest | CC-MAIN-2019-43 | en | refinedweb |
hi (cant post links)
currently trying to create local notifications without a service , so registerd an alarm
var alarmIntent = new Intent(Android.App.Application.Context, typeof(AlarmReceiver)); var pending = PendingIntent.GetBroadcast(Android.App.Application.Context, 0, alarmIntent, PendingIntentFlags.Immutable); var mAlarmservice = Android.App.Application.AlarmService; var alarmManager = (AlarmManager)Android.App.Application.Context.GetSystemService(mAlarmservice); alarmManager.SetInexactRepeating(AlarmType.RtcWakeup, SystemClock.ElapsedRealtime()+3000,60*1000, pending);
have set the class to be a broadcastreciver
[BroadcastReceiver(Enabled =true, Exported = true)] public class AlarmReceiver : BroadcastReceiver
and setup the notification.
string message = "You have " + "TESTING" + " orders waiting for approval"; var title = "TEST"; var mintent = new Intent(context, typeof(MainActivity)); mintent.SetFlags(ActivityFlags.NewTask | ActivityFlags.ClearTask); Random random = new Random(); int pushCount = random.Next(9999 - 1000) + 1000; //for multiplepushnotifications var pendingIntent = PendingIntent.GetActivity(context, 0, mintent, PendingIntentFlags.CancelCurrent); var builder = new NotificationCompat.Builder(context, CHANNEL_ID) .SetAutoCancel(true) // Dismiss the notification from the notification area when the user clicks on it .SetContentIntent(pendingIntent) .SetContentTitle(title) // Set the title .SetSmallIcon(Resource.Drawable.ic_media_play_dark) // This is the icon to display .SetContentText(message); // the message to display. var notificationManager = NotificationManagerCompat.From(context); notificationManager.Notify(NOTIFICATION_ID, builder.Build());
So this works as long as the app is active. but stops running if I stop the app.
from what the internet says the OS is deleting the broadcastreciver , it could be because I am running a HuaWei phone.
how exactly do you prevent the OS from deleting the BroadcastReceiver, register it in android manifest.
either the context is wrong. or is it because I start it in xaml on a content page(interface to android class)
"receiver android:name=".AlarmReceiver"/" added this to android manifest but did nothing so probably wrong
add an intentfilter?
[BroadcastReceiver(Enabled = true)]
[IntentFilter(new[] { "com.xamarin.example.TEST" })] <-- how does this work..
dont understand this
statically-registering-a-broadcast-receiver-a-working-example
running a huawei phone
keep-broadcast-receiver-running-after-application-is-closed
So how in Xamarin do you create a local notification that runs when the app is offline without a service
Answers
App Onlineis the the app is killed or cannot connect the network?
If the app was killed, it cannot be achieved, all the threads of this app will be killed by android , you cannot create a local notification by this app.
If app just cannot connect the network, the app is running, you can create a local notification in the UI thread,
If you want to get the data from online.And you could use google play service, Firebase cloud messaging is a good choice..
not a push notification .
a local notification , as in it should wake upp using the alarm manager() (some how wake the OS up after on a timer)
check every 30min when app not running(online)
check if internet is on.
if so use a restAPi get som info.
if info found send a local notifikation.
currently only works if the application is on.
currently trying to figure out how to build a service that will run on phone allways.
and starts on boot and app start(also know as a batteri drainer). and taskkillers will remove it.
so repeating the question.. with modifications
how in Xamarin do you create a local notification that runs when the app is not running, without a service
Do you want to run this app in EMUI or native android? If run it in the native android, it cannot be achieved.
so the app has to be running .
for local(pull) notifications to work.
in native android.
no way around that?
(Implicit broadcast: none that can be subscribed to? ) ok found article about "kiss goodbye to your implicit BroadcastReceivers" so thats out. all about saving power.
so since I need a person to login in the morning,
get the 24hour restapi token, so app can check every 30 min ,
if there is anything to authorise. (so just dont turn the app off )
dont know how Push notifications work, dont really know how the authentication server works.
but since I cant rebuild it , firebase is out(develop-authentication-custom auth system.)
so going with dont turn the app off option..
so used to interupt timers , in smaller cpus. just set timer and wake up run code set timer etc etc.
a well thanks | https://forums.xamarin.com/discussion/comment/377516 | CC-MAIN-2019-43 | en | refinedweb |
ize.
If you are not using the CSRF filter, you also should inject the
CSRFAddToken and
CSRFCheck action wrappers to force adding a token or a CSRF check on a specific action. Otherwise the token will not be available.
import play.api.mvc._ import play.api.mvc.Results._ import play.filters.csrf._ import play.filters.csrf.CSRF.Token class CSRFController(components: ControllerComponents, addToken: CSRFAddToken, checkToken: CSRFCheck) extends AbstractController(components) { def getToken = addToken(Action { implicit request => val Token(name, value) = CSRF.getToken.get Ok(s"$name=$value") }) }
CSRFCheck action, and it performs the check. It should be added to all actions that accept session authenticated POST form submissions:
import play.api.mvc._ import play.filters.csrf._ def save = checkToken { Action { implicit req => // handle body Ok } }
The second action is the
CSRFAddToken action, it generates a CSRF token if not already present on the incoming request. It should be added to all actions that render forms:
import play.api.mvc._ import play.filters.csrf._ def form = addToken { Action { the boiler plate code necessary to write actions:
def save = postAction { // handle body Ok } def form = getAction { implicit req => Ok(views.html.itemsForm) }Using | https://www.playframework.com/documentation/2.8.0-M5/ScalaCsrf | CC-MAIN-2019-43 | en | refinedweb |
Xamarin team!
I am in need of a solution at current for an iOS focused solution (potentially future Android) which will allow an object from a local HTML file which is displayed within a UIWebView to be populated using JavaScript, but calculating the value via C#.
The HTML files which I will be utilizing have associating JavaScript files which include business logic that dictates on the HTML form on what is available to the user based off their selection. This is a critical point to the app (and a major reason why a Web View based application is being considered).
I have seen the Razor Hybrid project on the Xamarin Blog, however this is not a viable solution due to the fact that the parent data-layer MVC project is not PCL compliant.
Looking at the example for the JsBridge package, available on the Component Store (knowing fully aware that it is only for iOS), I do not directly see a way to interact/modify existing objects within the HTML file which is opened in the UIWebView.
For example, I wish to have a text-box in my HTML source code which the value to it gets populated through JavaScript via C# (and then later retrieve the value from the text-box). If it is possible, would someone be able to explain how? I have attempted to create a reference via ID of the object in the HTML source, but I have been unsuccessful up to this point in doing so.
I have also found in the Xamarin.Forms forum a topic speaking about the HybridWebView as part of the Xamarin.Forms.Labs work (currently this functionality is in alpha), and I was curious if there was someone who could speak on it and give a more detailed explanation of its functionality and if they could explain whether it could meet the criteria which is required?
Sources:
iOS
Android
Cross-Platform.
Answers
I can write a small sample using the HybridWebView. Have you looked at the sample?
@SKall, at this time, I have not loaded the sample code to see it in full detail. I was honestly somewhat apprehensive and unsure of its stability given that it is in "alpha" mode. Would you or anyone have any timeline as to when it would come out of "alpha", hit "beta", and then be in the next release of Xamarin.Forms? It may make those above more comfortable for me to tell them that I am using something as part of a major release to Xamarin (rather than test/lab work).
Either way, I can pull down the example code and see what it can/can't do, but I would certainly appreciate any insights you can offer on it.
It's a bit of a Catch-22 as I wrote the hybrid but I don't have much use for it (nor am I an expert in HTML/JS). I would need more people using it and reporting any issues so I could fix them as needed. It was a curiosity / proof-of-concept project so aside from vector graphics and charting I have not used it much myself. Maybe @MichaelRidland could comment more about its use? I have made some modifications on the Android portion since his article but I am not sure if he has pushed any of his modifications to the Labs branch.
Do you have any sample HTML pages you would like to render? If you have the basic HTML already in place it wouldn't take me more than a few minutes to create a sample solution with two-way communication..
Hi.
The code is merged over. That blog post mentioned is in depth I think.
I will be using it in production, the control isn't very complicated and it's open source so using it isn't a issue for me.
Thanks
Michael
Yes, I read your blog in detail and noticed your changes were the ones to the Func. Just FYI, if you inject a JSON serializer (such as Json.Net or ServiceStack) plugin to the hybrid you can use the CallJsFunction method with parameters. This way there is no need to manually serialize C# objects to JSON.
@Skall, thanks so much for your detailed responses, they've been insightful thus far. I'll take a look at your attached demo project and get back to you.
At this time, I do not have any raw HTML to show for this project (we're in the process of converting from Web Forms to MVC which must be completed prior to our mobile development if it should function the way we want it too.
@SKall, I think the example you provided will certainly help us to proceed. I marked your response with the example solution as the Answer, but it seems like when I did this, it removed the attached zip file after I did so? I'd like to keep it available to anyone who may stumble upon this thread and wishes to see the solution as well. (Perhaps add it back in another comment?)
You mention a JSON serializer, with such tie-ins with Json.NET and not having to manually serialize? This has the potential to being beneficial to this project as well. I looked at the source you linked, but I'd like further explanation, if possible.
It looks as though you pass a JavaScript function name and then an array of objects, which would be the attribute/values from the JSON stream... would that be an accurate assessment? That said, I'm uncertain as to how the objects from the stream would be tied to an HTML object.
What happens if the names are different from the HTML to the JSON stream, like the example below? (Without being clever and ignoring "animal" in the id value of the HTML, this is just to represent a difference in naming between the two.)
{"dog":[ {"type":"Labrador", "name":"Lady", "age":"3"} ]}
<html> <input id ="animalType", <br /> <input id="animalName", <br /> <input id="animalAge", </html>
The function will use the selected JSON serializer to serialize the C# objects to JavaScript here:
XLabs has two plugins for JSON, Json.NET & ServiceStack. I used the latter on the sample. You can tweak the C# names to JS names with DataMemberAttribute.
Thanks Skall this is what I need i.e. a bare bones implementation that works. Thanks for all your hard work on this!
I downloaded the sample project, but It doesnt seem to load the file. Is there something that changed with recent updates? I was trying to see a working sample implementing this functionality.
There was a refactoring to better separate functionality. The project is now under XLabs instead of Xamarin.Forms.Labs. You can take the HTML & the page code from the sample and put into a new project. Install package XLabs.Forms & XLabs.Serialization.ServiceStackV3 and fix the namespace in the sample code and it should work. | https://forums.xamarin.com/discussion/comment/82353/ | CC-MAIN-2019-43 | en | refinedweb |
Hide Forgot
Version-Release number of selected component:
linphone-3.6.1-10.fc23
Additional info:
reporter: libreport-2.6.4
backtrace_rating: 4
cmdline: linphone
crash_function: sal_address_as_string
executable: /usr/bin/linphone
global_pid: 22195
kernel: 4.6.5-200.fc23.x86_64
runlevel: N 5
type: CCpp
uid: 1000
Truncated backtrace:
Thread no. 1 (8 frames)
#0 sal_address_as_string at sal_eXosip2.c:2546
#1 linphone_gtk_notify at main.c:1122
#2 linphone_gtk_call_state_changed at main.c:1247
#3 linphone_call_set_state at linphonecall.c:712
#4 sal_iterate at sal_eXosip2.c:1316
#7 linphone_core_iterate at linphonecore.c:2106
#8 linphone_gtk_iterate at main.c:557
#14 gtk_main at gtkmain.c:1268
Created attachment 1191279 [details]
File: backtrace
Created attachment 1191280 [details]
File: cgroup
Created attachment 1191281 [details]
File: core_backtrace
Created attachment 1191282 [details]
File: dso_list
Created attachment 1191283 [details]
File: environ
Created attachment 1191284 [details]
File: exploitable
Created attachment 1191285 [details]
File: limits
Created attachment 1191286 [details]
File: maps
Created attachment 1191287 [details]
File: mountinfo
Created attachment 1191288 [details]
File: namespaces
Created attachment 1191289 [details]
File: open_fds
Created attachment 1191290 [details]
File: proc_pid_status
Created attachment 1191291 . | https://partner-bugzilla.redhat.com/show_bug.cgi?id=1367469 | CC-MAIN-2019-43 | en | refinedweb |